[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434289#comment-15434289 ] Hadoop QA commented on HADOOP-13538: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 21s{color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 8m 21s{color} | {color:red} root generated 1 new + 710 unchanged - 0 fixed = 711 total (was 710) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 41s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 43m 9s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.net.TestDNS | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825197/HADOOP-13538.002.patch | | JIRA Issue | HADOOP-13538 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 141d1c0b80a7 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c37346d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/10352/artifact/patchprocess/diff-compile-javac-root.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10352/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10352/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10352/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Deprecate getInstance and initiate methods with Path in TrashPolicy > ---
[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434286#comment-15434286 ] Hadoop QA commented on HADOOP-13055: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s{color} | 
{color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 9 new + 43 unchanged - 2 fixed = 52 total (was 45) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 34s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 2s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ipc.TestRPC | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825198/HADOOP-13055.02.patch | | JIRA Issue | HADOOP-13055 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7ae5c6096915 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c37346d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10353/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10353/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10353/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10353/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Implement linkMergeSlash for ViewFs > --- > > Key: HADOOP-13055 > URL: htt
[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab
[ https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434252#comment-15434252 ] Hadoop QA commented on HADOOP-13433: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 8s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 44s{color} | 
{color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 96 unchanged - 2 fixed = 97 total (was 98) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 17s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 43m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825196/HADOOP-13433-v1.patch | | JIRA Issue | HADOOP-13433 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux b6ddca51a2a4 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c37346d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10351/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10351/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10351/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Race in UGI.reloginFromKeytab > - > > Key: HADOOP-13433 > URL: https://issues.apache.org/jira/browse/HADOOP-13433 > Pro
[jira] [Updated] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HADOOP-13538: --- Attachment: HADOOP-13538.002.patch Sorry, I missed that; uploading the v002 patch again. > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13538.001.patch, HADOOP-13538.002.patch > > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HADOOP-13538: --- Attachment: (was: HADOOP-13538.002.patch) > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13538.001.patch, HADOOP-13538.002.patch > > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them.
[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434234#comment-15434234 ] Akira Ajisaka commented on HADOOP-13538: Would you update the javadoc in getInstance and TrashPolicyDefault#initialize as well? > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13538.001.patch, HADOOP-13538.002.patch > > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them.
[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-13055: --- Attachment: HADOOP-13055.02.patch Updating patch to fix the unit test failure and improve the {{resolve}} logic. > Implement linkMergeSlash for ViewFs > --- > > Key: HADOOP-13055 > URL: https://issues.apache.org/jira/browse/HADOOP-13055 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, viewfs >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch, > HADOOP-13055.02.patch > > > In a multi-cluster environment it is sometimes useful to operate on the root > / slash directory of an HDFS cluster. E.g., list all top level directories. > Quoting the comment in {{ViewFs}}: > {code} > * A special case of the merge mount is where mount table's root is merged > * with the root (slash) of another file system: > * > * fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/ > * > * In this case the root of the mount table is merged with the root of > *hdfs://nn99/ > {code}
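As a side note on the configuration quoted above: the mount-table property name is assembled from the mount-table name. A minimal sketch of that key construction, assuming hypothetical helper names (the key format comes from the {{ViewFs}} comment in the issue; the class and method below are illustrative, not part of the Hadoop API):

```java
// Hypothetical helper, not part of Hadoop: builds the linkMergeSlash
// property key for a given mount table, matching the format quoted
// from the ViewFs comment above.
public class MountTableKeySketch {
    public static String linkMergeSlashKey(String mountTable) {
        return "fs.viewfs.mounttable." + mountTable + ".linkMergeSlash";
    }
}
```

For the `default` mount table this reproduces the exact key shown in the quoted comment, `fs.viewfs.mounttable.default.linkMergeSlash`.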
[jira] [Commented] (HADOOP-13539) KMS's zookeeper-based secret manager should be consistent when failed to remove node
[ https://issues.apache.org/jira/browse/HADOOP-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434223#comment-15434223 ] Hadoop QA commented on HADOOP-13539: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 3s{color} | 
{color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 6s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 41m 20s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825194/HADOOP-13539.01.patch | | JIRA Issue | HADOOP-13539 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 75d56845ef08 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c37346d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10350/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10350/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > KMS's zookeeper-based secret manager should be consistent when failed to > remove node > > > Key: HADOOP-13539 > URL: https://issues.apache.org/jira/browse/HADOOP-13539 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Che
[jira] [Updated] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HADOOP-13538: --- Attachment: HADOOP-13538.002.patch Thanks [~ajisakaa] for the review. Attaching a new patch to make this change. > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13538.001.patch, HADOOP-13538.002.patch > > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them.
[jira] [Updated] (HADOOP-13529) Do some code refactoring
[ https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13529: --- Description: 1. argument and variant naming 2. utility class 3. add some comments 4. adjust some configuration 5. fix TODO was: 1. argument and variant naming 2. utility class 3. add some comments 4. adjust some configuration > Do some code refactoring > > > Key: HADOOP-13529 > URL: https://issues.apache.org/jira/browse/HADOOP-13529 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > > 1. argument and variant naming > 2. utility class > 3. add some comments > 4. adjust some configuration > 5. fix TODO
[jira] [Updated] (HADOOP-13529) Do some code refactoring
[ https://issues.apache.org/jira/browse/HADOOP-13529?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13529: --- Description: 1. argument and variant naming 2. utility class 3. add some comments 4. adjust some configuration was: 1. argument and variant naming 2. utility class > Do some code refactoring > > > Key: HADOOP-13529 > URL: https://issues.apache.org/jira/browse/HADOOP-13529 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > > 1. argument and variant naming > 2. utility class > 3. add some comments > 4. adjust some configuration
[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab
[ https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Duo Zhang updated HADOOP-13433: --- Attachment: HADOOP-13433-v1.patch checkstyle. > Race in UGI.reloginFromKeytab > - > > Key: HADOOP-13433 > URL: https://issues.apache.org/jira/browse/HADOOP-13433 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Duo Zhang >Assignee: Duo Zhang > Attachments: HADOOP-13433-v1.patch, HADOOP-13433.patch > > > This is a problem that has troubled us for several years. For our HBase > cluster, sometimes the RS will be stuck due to > {noformat} > 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception > encountered while connecting to the server : > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: The ticket > isn't for us (35) - BAD TGS SERVER NAME)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:396) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781) > at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at 
org.apache.hadoop.hbase.util.Methods.call(Methods.java:37) > at org.apache.hadoop.hbase.security.User.call(User.java:607) > at org.apache.hadoop.hbase.security.User.access$700(User.java:51) > at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321) > at > org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164) > at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004) > at > org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107) > at $Proxy24.replicateLogEntries(Unknown Source) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515) > Caused by: GSSException: No valid credentials provided (Mechanism level: The > ticket isn't for us (35) - BAD TGS SERVER NAME) > at > sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663) > at > sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248) > at > sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180) > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175) > ... 
23 more > Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME > at sun.security.krb5.KrbTgsRep.&lt;init&gt;(KrbTgsRep.java:64) > at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185) > at > sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294) > at > sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106) > at > sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557) > at > sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594) > ... 26 more > Caused by: KrbException: Identifier doesn't match expected value (906) > at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133) > at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58) > at sun.security.krb5.internal.TGSRep.&lt;init&gt;(TGSRep.java:53) > at sun.security.krb5.KrbTgsRep.&lt;init&gt;(KrbTgsRep.java:46) > ... 31 more > {noformat} > It rarely happens, but if it happens, the regionserver will be stuck and can > never recover. > Recently we added a log after a successful re-login which prints the
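One common remedy for this class of relogin race is to serialize credential refreshes behind a single lock and skip a refresh that another thread has just completed. A self-contained sketch under entirely hypothetical names (this is not the actual UserGroupInformation code; the counter stands in for the real Kerberos login sequence):

```java
// Illustrative sketch only: serializes relogin attempts so two threads
// cannot refresh Kerberos credentials concurrently, and skips a refresh
// if another thread completed one within the last second.
public class ReloginSketch {
    private final Object loginLock = new Object();
    private long refreshes;   // guarded by loginLock
    private long lastRelogin; // guarded by loginLock

    public void reloginFromKeytab() {
        synchronized (loginLock) {
            long now = System.currentTimeMillis();
            if (now - lastRelogin < 1000) {
                return;       // another caller just refreshed; nothing to do
            }
            refreshes++;      // stand-in for the real Krb5 login dance
            lastRelogin = now;
        }
    }

    public long refreshCount() {
        synchronized (loginLock) {
            return refreshes;
        }
    }
}
```

With this guard, a second caller arriving inside the refresh window observes the already-refreshed credentials instead of racing a concurrent login.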
[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434201#comment-15434201 ] Akira Ajisaka commented on HADOOP-13538: Cancelling my +1. Would you update the javadoc to document the alternative as follows? {code} * @deprecated Use {@link #initialize(Configuration, FileSystem)} instead. {code} > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13538.001.patch > > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them.
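The suggested `@deprecated` javadoc tag pairs naturally with the `@Deprecated` annotation on the old overload. A minimal, hypothetical sketch of the pattern (the class, method signatures, and flag below are illustrative; the real change lives in TrashPolicy and TrashPolicyDefault):

```java
// Illustrative only -- not the actual TrashPolicy source. The deprecated
// overload carries an @deprecated javadoc tag naming the replacement and
// delegates to it, so both entry points behave identically until removal.
public class TrashPolicySketch {
    private boolean initialized;

    /**
     * @deprecated Use {@link #initialize()} instead.
     */
    @Deprecated
    public void initialize(String trashPath) {
        initialize();            // delegate to the replacement
    }

    public void initialize() {
        initialized = true;      // the Path-free initialization
    }

    public boolean isInitialized() {
        return initialized;
    }
}
```

Callers of the old overload get a compile-time deprecation warning pointing at the replacement, while existing code keeps working.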
[jira] [Updated] (HADOOP-13539) KMS's zookeeper-based secret manager should be consistent when failed to remove node
[ https://issues.apache.org/jira/browse/HADOOP-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13539: --- Attachment: HADOOP-13539.01.patch Patch 1 to express the idea. Current behavior seems pretty odd: fail to remove master key = debug log; fail to remove a token = RTE. New behavior is to log a warn with some description indicating this may be harmless, with the actual exception. [~andrew.wang], would you have time to review this? Thanks much! > KMS's zookeeper-based secret manager should be consistent when failed to > remove node > > > Key: HADOOP-13539 > URL: https://issues.apache.org/jira/browse/HADOOP-13539 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13539.01.patch > > > In {{ZKDelegationTokenSecretManager}}, the 2 methods > {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet > handle exceptions differently. We should not throw RTE if a node cannot be > removed - logging is enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
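The consistency proposed in this patch, both remove paths logging a warning instead of one of them rethrowing as a RuntimeException, can be sketched without any ZooKeeper dependency. The deleter below is a stand-in callback, and all names are illustrative, not the actual ZKDelegationTokenSecretManager code:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: both remove paths share one helper that logs a warning
// on failure, so neither rethrows as RuntimeException, matching the proposal.
public class ConsistentRemoveSketch {
    public interface NodeDeleter { void delete(String path) throws Exception; }

    private final NodeDeleter deleter;
    public final List<String> warnings = new ArrayList<>();

    public ConsistentRemoveSketch(NodeDeleter deleter) { this.deleter = deleter; }

    public void removeStoredMasterKey(String keyPath) {
        removeQuietly(keyPath, "master key");
    }

    public void removeStoredToken(String tokenPath) {
        removeQuietly(tokenPath, "token");  // no RuntimeException on failure anymore
    }

    private void removeQuietly(String path, String what) {
        try {
            deleter.delete(path);
        } catch (Exception e) {
            // Possibly harmless (e.g. the node was already removed by another
            // KMS instance); record a warning and keep going.
            warnings.add("Failed to remove " + what + " at " + path + ": " + e);
        }
    }
}
```

With a deleter that always fails (like the NoNodeException in the report below), the expired-token remover thread would keep running instead of dying.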
[jira] [Updated] (HADOOP-13539) KMS's zookeeper-based secret manager should be consistent when failed to remove node
[ https://issues.apache.org/jira/browse/HADOOP-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13539: --- Status: Patch Available (was: Open) > KMS's zookeeper-based secret manager should be consistent when failed to > remove node > > > Key: HADOOP-13539 > URL: https://issues.apache.org/jira/browse/HADOOP-13539 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13539.01.patch > > > In {{ZKDelegationTokenSecretManager}}, the 2 methods > {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet > handle exceptions differently. We should not throw RTE if a node cannot be > removed - logging is enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13539) KMS's zookeeper-based secret manager should be consistent when failed to remove node
[ https://issues.apache.org/jira/browse/HADOOP-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13539: --- Description: In {{ZKDelegationTokenSecretManager}}, the 2 methods {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet handle exceptions differently. We should not throw RTE if a node cannot be removed - logging is enough. (was: In {{ZKDelegationTokenSecretManager}}, the 2 methods {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet handle exceptions differently. We should not throw RTE if a node cannot be removed - error logging is enough.) > KMS's zookeeper-based secret manager should be consistent when failed to > remove node > > > Key: HADOOP-13539 > URL: https://issues.apache.org/jira/browse/HADOOP-13539 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > > In {{ZKDelegationTokenSecretManager}}, the 2 methods > {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet > handle exceptions differently. We should not throw RTE if a node cannot be > removed - logging is enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13539) KMS's zookeeper-based secret manager should be consistent when failed to remove node
[ https://issues.apache.org/jira/browse/HADOOP-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434184#comment-15434184 ] Xiao Chen commented on HADOOP-13539: A sample exception is: {noformat} 2016-08-23 21:34:50,732 ERROR org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager: ExpiredTokenRemover thread received unexpected exception java.lang.RuntimeException: Could not remove Stored Token ZKDTSMDelegationToken_3 at org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.removeStoredToken(ZKDelegationTokenSecretManager.java:821) at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.removeExpiredToken(AbstractDelegationTokenSecretManager.java:605) at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager.access$400(AbstractDelegationTokenSecretManager.java:54) at org.apache.hadoop.security.token.delegation.AbstractDelegationTokenSecretManager$ExpiredTokenRemover.run(AbstractDelegationTokenSecretManager.java:656) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /zkdtsm/ZKDTSMRoot/ZKDTSMTokensRoot/DT_3 at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.delete(ZooKeeper.java:873) at org.apache.curator.framework.imps.DeleteBuilderImpl$5.call(DeleteBuilderImpl.java:238) at org.apache.curator.framework.imps.DeleteBuilderImpl$5.call(DeleteBuilderImpl.java:233) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) at org.apache.curator.framework.imps.DeleteBuilderImpl.pathInForeground(DeleteBuilderImpl.java:230) at org.apache.curator.framework.imps.DeleteBuilderImpl.forPath(DeleteBuilderImpl.java:214) at org.apache.curator.framework.imps.DeleteBuilderImpl.forPath(DeleteBuilderImpl.java:41) at 
org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager.removeStoredToken(ZKDelegationTokenSecretManager.java:815) ... 4 more {noformat} > KMS's zookeeper-based secret manager should be consistent when failed to > remove node > > > Key: HADOOP-13539 > URL: https://issues.apache.org/jira/browse/HADOOP-13539 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > > In {{ZKDelegationTokenSecretManager}}, the 2 methods > {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet > handle exceptions differently. We should not throw RTE if a node cannot be > removed - error logging is enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13539) KMS's zookeeper-based secret manager should be consistent when failed to remove node
[ https://issues.apache.org/jira/browse/HADOOP-13539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13539: --- Description: In {{ZKDelegationTokenSecretManager}}, the 2 methods {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet handle exceptions differently. We should not throw RTE if a node cannot be removed - error logging is enough. > KMS's zookeeper-based secret manager should be consistent when failed to > remove node > > > Key: HADOOP-13539 > URL: https://issues.apache.org/jira/browse/HADOOP-13539 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Xiao Chen >Assignee: Xiao Chen > > In {{ZKDelegationTokenSecretManager}}, the 2 methods > {{removeStoredMasterKey}} and {{removeStoredToken}} are very much alike, yet > handle exceptions differently. We should not throw RTE if a node cannot be > removed - error logging is enough. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13539) KMS's zookeeper-based secret manager should be consistent when failed to remove node
Xiao Chen created HADOOP-13539: -- Summary: KMS's zookeeper-based secret manager should be consistent when failed to remove node Key: HADOOP-13539 URL: https://issues.apache.org/jira/browse/HADOOP-13539 Project: Hadoop Common Issue Type: Bug Components: kms Affects Versions: 2.6.0 Reporter: Xiao Chen Assignee: Xiao Chen -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434174#comment-15434174 ] Akira Ajisaka commented on HADOOP-13538: +1, thanks Yiqun. > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13538.001.patch > > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-13498: --- Hadoop Flags: Reviewed > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch, > HADOOP-13498-HADOOP-12756.002.patch, HADOOP-13498-HADOOP-12756.003.patch, > HADOOP-13498-HADOOP-12756.004.patch > > > We should not only throw exception when exceed the 10000 limit of multi-part > number, but should guarantee to upload any object no matter how big it is. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
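The requirement in this issue, staying within the 10000-part cap no matter how large the object is, reduces to simple arithmetic: enlarge the part size whenever the configured one would need too many parts. A standalone sketch (names and the helper below are illustrative, not the actual hadoop-aliyun code):

```java
// Illustrative sketch of the part-count constraint behind HADOOP-13498:
// pick a part size that never requires more than MAX_PARTS parts.
public class PartSizeSketch {
    public static final long MAX_PARTS = 10000;

    /**
     * Return a part size that keeps the upload within MAX_PARTS,
     * preferring the configured size when it already fits.
     */
    public static long effectivePartSize(long fileSize, long configuredPartSize) {
        long parts = (fileSize + configuredPartSize - 1) / configuredPartSize;
        if (parts <= MAX_PARTS) {
            return configuredPartSize;
        }
        // Round up so MAX_PARTS parts always cover the whole file.
        return (fileSize + MAX_PARTS - 1) / MAX_PARTS;
    }
}
```

For example, a 100 GB object with a 5 MB configured part size would need over 20000 parts, so the effective part size must grow to roughly fileSize / 10000.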
[jira] [Commented] (HADOOP-13345) S3Guard: Improved Consistency for S3A
[ https://issues.apache.org/jira/browse/HADOOP-13345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434139#comment-15434139 ] Aaron Fabbri commented on HADOOP-13345: --- Having the MetadataStore interface is an important first step for us to parallelize our effort here. Thanks again Chris for getting that first patch out. I still have questions about the subtasks though. There is still some fuzziness with respect to the policy part. (We may want to have a conf. call to discuss--and I'm open tomorrow.) I've been thinking about policy a little and I believe: - Allowing MetadataStore implementations to opt in/out of being source of truth is important. Implementations may wish to opt out based on implementation complexity, or lack of transactions for underlying store, or policy (LRU discard). - Allowing the client to opt out of relying on MetadataStore as source of truth is also desirable. Workloads that add files outside of hadoop, for example. And opting out is less risky while we stabilize the codebase. This implies some configuration parameters (ignoring the naming for now--I assume a future where this is factored out of s3a for any FS client to utilize) fs..metadatastore.allow.authoritative - If true, allow configured metadata store (if any) to be source of truth on cached file metadata and directory listings. - If true, but configured metadata store does not support being authoritative, this setting will have no effect, as the MetadataStore will always return results marked as non-authoritative. fs..metadatastore.class - Configure which MetadataStore implementation to use, if any. - This may replace fs.s3a.s3guard.enabled proposed in doc? fs.metadatastore..fullycache.directories - If the metadata store implementation supports being authoritative on directory listings, this will cause it to return DirectoryListMetadata (name tbd) results with fullyCached=true when it has complete directory listing. 
- If metadata store implementation does not support this, it should log an error. Client will work correctly as implementation will never claim to fully cache listings / PathMetadata. We could name this authoritative.directories instead.. We could also add an analogue for files: ...authoritative.files as well. In my prototype I assumed get() on a single Path could always be authoritative. I could go either way. Thoughts? > S3Guard: Improved Consistency for S3A > - > > Key: HADOOP-13345 > URL: https://issues.apache.org/jira/browse/HADOOP-13345 > Project: Hadoop Common > Issue Type: New Feature > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13345.prototype1.patch, > S3C-ConsistentListingonS3-Design.pdf, S3GuardImprovedConsistencyforS3A.pdf, > S3GuardImprovedConsistencyforS3AV2.pdf, s3c.001.patch > > > This issue proposes S3Guard, a new feature of S3A, to provide an option for a > stronger consistency model than what is currently offered. The solution > coordinates with a strongly consistent external store to resolve > inconsistencies caused by the S3 eventual consistency model. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
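The interaction between the opt-outs described above (the client-side allow.authoritative setting, the store's capability, and the per-listing fullyCached flag) can be reduced to a small predicate. A hedged sketch with hypothetical names; the real config keys were still under discussion in this thread:

```java
// Sketch of the proposed policy: a cached listing may bypass the backing
// store only when the client allows it, the MetadataStore implementation
// supports being authoritative, AND this particular listing is fully cached.
// All names here are hypothetical, not actual S3Guard code.
public class AuthoritativePolicySketch {
    private final boolean clientAllowsAuthoritative;   // cf. the allow.authoritative key above
    private final boolean storeSupportsAuthoritative;  // capability of the MetadataStore impl

    public AuthoritativePolicySketch(boolean clientAllows, boolean storeSupports) {
        this.clientAllowsAuthoritative = clientAllows;
        this.storeSupportsAuthoritative = storeSupports;
    }

    /** fullyCached comes from the store's listing result (DirectoryListMetadata, name tbd). */
    public boolean canSkipBackingStore(boolean fullyCached) {
        return clientAllowsAuthoritative && storeSupportsAuthoritative && fullyCached;
    }
}
```

This matches the comment's fallback behavior: if the store never supports being authoritative, it never returns fullyCached=true, so the client setting has no effect.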
[jira] [Updated] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13498: --- Resolution: Fixed Status: Resolved (was: Patch Available) > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch, > HADOOP-13498-HADOOP-12756.002.patch, HADOOP-13498-HADOOP-12756.003.patch, > HADOOP-13498-HADOOP-12756.004.patch > > > We should not only throw exception when exceed the 10000 limit of multi-part > number, but should guarantee to upload any object no matter how big it is. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434081#comment-15434081 ] shimingfei commented on HADOOP-13498: - [~uncleGen] I have merged this. Thanks! > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch, > HADOOP-13498-HADOOP-12756.002.patch, HADOOP-13498-HADOOP-12756.003.patch, > HADOOP-13498-HADOOP-12756.004.patch > > > We should not only throw exception when exceed the 10000 limit of multi-part > number, but should guarantee to upload any object no matter how big it is. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics
[ https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434055#comment-15434055 ] Aaron Fabbri commented on HADOOP-13065: --- Ooops.. Not sure how I accidentally clicked Assign To Me.. thanks for fixing that [~hitesh] > Add a new interface for retrieving FS and FC Statistics > --- > > Key: HADOOP-13065 > URL: https://issues.apache.org/jira/browse/HADOOP-13065 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Ram Venkatesh >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, > HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, > HADOOP-13065.012.patch, HADOOP-13065.013.patch, HDFS-10175.000.patch, > HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, > HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, > TestStatisticsOverhead.java > > > Currently FileSystem.Statistics exposes the following statistics: > BytesRead > BytesWritten > ReadOps > LargeReadOps > WriteOps > These are in-turn exposed as job counters by MapReduce and other frameworks. > There is logic within DfsClient to map operations to these counters that can > be confusing, for instance, mkdirs counts as a writeOp. > Proposed enhancement: > Add a statistic for each DfsClient operation including create, append, > createSymlink, delete, exists, mkdirs, rename and expose them as new > properties on the Statistics object. The operation-specific counters can be > used for analyzing the load imposed by a particular job on HDFS. > For example, we can use them to identify jobs that end up creating a large > number of files. > Once this information is available in the Statistics object, the app > frameworks like MapReduce can expose them as additional counters to be > aggregated and recorded as part of job summary. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
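The enhancement proposed in this issue, a dedicated counter per client operation instead of the coarse readOps/writeOps buckets, can be sketched with a plain concurrent map. This is illustrative only; the interface actually discussed here is richer than a map of counters:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of per-operation statistics: each client call (mkdirs,
// rename, delete, ...) increments its own named counter, so load analysis
// can tell "many mkdirs" apart from "many renames".
public class OpStatisticsSketch {
    private final Map<String, AtomicLong> opCounts = new ConcurrentHashMap<>();

    public void record(String op) {
        opCounts.computeIfAbsent(op, k -> new AtomicLong()).incrementAndGet();
    }

    public long getCount(String op) {
        AtomicLong c = opCounts.get(op);
        return c == null ? 0 : c.get();
    }
}
```

A framework sitting above this (as the description suggests for MapReduce) would simply export each entry of the map as a job counter.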
[jira] [Commented] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434041#comment-15434041 ] Genmao Yu commented on HADOOP-13498: [~mingfei] new patch is available and the result of unit test is: {code} [INFO] Scanning for projects... [INFO] [INFO] [INFO] Building Apache Hadoop Aliyun OSS support 3.0.0-alpha2-SNAPSHOT [INFO] [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-aliyun --- [INFO] Deleting /home/yugm/apps/hadoop/hadoop-tools/hadoop-aliyun/target [INFO] [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-aliyun --- [INFO] Executing tasks main: [mkdir] Created dir: /home/yugm/apps/hadoop/hadoop-tools/hadoop-aliyun/target/test-dir [INFO] Executed tasks [INFO] [INFO] --- maven-remote-resources-plugin:1.5:process (default) @ hadoop-aliyun --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ hadoop-aliyun --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /home/yugm/apps/hadoop/hadoop-tools/hadoop-aliyun/src/main/resources [INFO] Copying 2 resources [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-aliyun --- [INFO] Compiling 7 source files to /home/yugm/apps/hadoop/hadoop-tools/hadoop-aliyun/target/classes [INFO] [INFO] --- maven-dependency-plugin:2.2:list (deplist) @ hadoop-aliyun --- [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ hadoop-aliyun --- [INFO] Using 'UTF-8' encoding to copy filtered resources. 
[INFO] Copying 5 resources [INFO] Copying 2 resources [INFO] [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ hadoop-aliyun --- [INFO] Compiling 12 source files to /home/yugm/apps/hadoop/hadoop-tools/hadoop-aliyun/target/test-classes [INFO] [INFO] --- maven-surefire-plugin:2.17:test (default-test) @ hadoop-aliyun --- [INFO] Surefire report directory: /home/yugm/apps/hadoop/hadoop-tools/hadoop-aliyun/target/surefire-reports --- T E S T S --- --- T E S T S --- Running org.apache.hadoop.fs.aliyun.oss.TestOSSTemporaryCredentials Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.433 sec - in org.apache.hadoop.fs.aliyun.oss.TestOSSTemporaryCredentials Running org.apache.hadoop.fs.aliyun.oss.TestOSSOutputStream Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.912 sec - in org.apache.hadoop.fs.aliyun.oss.TestOSSOutputStream Running org.apache.hadoop.fs.aliyun.oss.TestOSSFileSystemContract Tests run: 46, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 26.263 sec - in org.apache.hadoop.fs.aliyun.oss.TestOSSFileSystemContract Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractRename Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.221 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractRename Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractMkdir Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.503 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractMkdir Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractDelete Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 6.154 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractDelete Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractOpen Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.777 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractOpen Running 
org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractCreate Tests run: 6, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 4.031 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractCreate Running org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractSeek Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 9.896 sec - in org.apache.hadoop.fs.aliyun.oss.contract.TestOSSContractSeek Running org.apache.hadoop.fs.aliyun.oss.TestOSSInputStream Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.736 sec - in org.apache.hadoop.fs.aliyun.oss.TestOSSInputStream Results : Tests run: 101, Failures: 0, Errors: 0, Skipped: 1 [INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 01:34 min [INFO] Finished at: 2016-08-24T09:44:40+08:00 [INFO] Final Memory: 34M
[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15434014#comment-15434014 ] Hadoop QA commented on HADOOP-13538: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 7s{color} | 
{color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 7s{color} | {color:red} root generated 1 new + 710 unchanged - 0 fixed = 711 total (was 710) {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 17m 5s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 48m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Timed out junit tests | org.apache.hadoop.http.TestHttpServerLifecycle | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825165/HADOOP-13538.001.patch | | JIRA Issue | HADOOP-13538 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 13eaad5b3849 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c37346d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | javac | https://builds.apache.org/job/PreCommit-HADOOP-Build/10349/artifact/patchprocess/diff-compile-javac-root.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10349/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10349/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10349/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Deprecate getInstance and initiate methods with Path in TrashP
[jira] [Commented] (HADOOP-13396) Allow pluggable audit loggers in KMS
[ https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433999#comment-15433999 ] Andrew Wang commented on HADOOP-13396: -- LGTM +1, thanks Xiao. Looking forward to your JSON logger followup. > Allow pluggable audit loggers in KMS > > > Key: HADOOP-13396 > URL: https://issues.apache.org/jira/browse/HADOOP-13396 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, > HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, > HADOOP-13396.06.patch, HADOOP-13396.07.patch, HADOOP-13396.08.patch, > HADOOP-13396.09.patch > > > Currently, KMS audit log is using log4j, to write a text format log. > We should refactor this, so that people can easily add new format audit logs. > The current text format log should be the default, and all of its behavior > should remain compatible. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
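A pluggable audit-logger design like the one reviewed here usually reduces to a small interface plus a list of configured implementations, so a JSON logger can sit alongside the default text one. A hedged sketch with illustrative names, not the actual KMS classes:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of pluggable audit logging: callers emit one event,
// and every configured logger formats it its own way (text, JSON, ...).
public class AuditSketch {
    public interface AuditLogger { void logOk(String op, String user, String key); }

    private final List<AuditLogger> loggers = new ArrayList<>();

    public void addLogger(AuditLogger l) { loggers.add(l); }

    // Dispatch a successful-operation event to all configured loggers.
    public void ok(String op, String user, String key) {
        for (AuditLogger l : loggers) {
            l.logOk(op, user, key);
        }
    }
}
```

Because callers only see the dispatcher, adding a new format (the JSON follow-up mentioned above) would not require touching any audit call sites.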
[jira] [Commented] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initiate code
[ https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433952#comment-15433952 ] Yiqun Lin commented on HADOOP-13534: Hi, [~ajisakaa], thanks for the comments. {quote} would you file another jira to deprecate the methods before removing them? {quote} Done. Filed the jira HADOOP-13538 to track this. > Remove unused TrashPolicy#getInstance and initiate code > --- > > Key: HADOOP-13534 > URL: https://issues.apache.org/jira/browse/HADOOP-13534 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Zhe Zhang >Assignee: Yiqun Lin >Priority: Minor > Attachments: HDFS-9785.001.patch > > > A follow-on from HDFS-8831: now the {{getInstance}} and {{initiate}} APIs > with Path are not used anymore. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics
[ https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hitesh Shah updated HADOOP-13065: - Assignee: Mingliang Liu (was: Aaron Fabbri) > Add a new interface for retrieving FS and FC Statistics > --- > > Key: HADOOP-13065 > URL: https://issues.apache.org/jira/browse/HADOOP-13065 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Ram Venkatesh >Assignee: Mingliang Liu > Fix For: 2.8.0 > > Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, > HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, > HADOOP-13065.012.patch, HADOOP-13065.013.patch, HDFS-10175.000.patch, > HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, > HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, > TestStatisticsOverhead.java > > > Currently FileSystem.Statistics exposes the following statistics: > BytesRead > BytesWritten > ReadOps > LargeReadOps > WriteOps > These are in-turn exposed as job counters by MapReduce and other frameworks. > There is logic within DfsClient to map operations to these counters that can > be confusing, for instance, mkdirs counts as a writeOp. > Proposed enhancement: > Add a statistic for each DfsClient operation including create, append, > createSymlink, delete, exists, mkdirs, rename and expose them as new > properties on the Statistics object. The operation-specific counters can be > used for analyzing the load imposed by a particular job on HDFS. > For example, we can use them to identify jobs that end up creating a large > number of files. > Once this information is available in the Statistics object, the app > frameworks like MapReduce can expose them as additional counters to be > aggregated and recorded as part of job summary. 
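The enhancement proposed in HADOOP-13065 — one counter per DfsClient operation, exposed on the Statistics object — can be sketched with plain JDK concurrency primitives. This is an illustrative stand-in under assumed names (`OpStatistics`, `increment`, `get` are hypothetical), not the API defined in the attached patches:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

/** Hypothetical per-operation counter map, in the spirit of the HADOOP-13065 proposal. */
public class OpStatistics {
    private final Map<String, LongAdder> ops = new ConcurrentHashMap<>();

    /** Record one invocation of the named operation, e.g. "mkdirs" or "rename". */
    public void increment(String op) {
        ops.computeIfAbsent(op, k -> new LongAdder()).increment();
    }

    /** Current count for an operation; 0 if it was never recorded. */
    public long get(String op) {
        LongAdder a = ops.get(op);
        return a == null ? 0L : a.sum();
    }

    public static void main(String[] args) {
        OpStatistics stats = new OpStatistics();
        stats.increment("mkdirs");
        stats.increment("mkdirs");
        stats.increment("rename");
        System.out.println("mkdirs=" + stats.get("mkdirs") + " rename=" + stats.get("rename"));
        // prints: mkdirs=2 rename=1
    }
}
```

LongAdder keeps the hot-path increment cheap under contention, which matters because these counters would sit on every client RPC; a framework like MapReduce could then surface each map entry as an additional job counter.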
[jira] [Comment Edited] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433942#comment-15433942 ] Yiqun Lin edited comment on HADOOP-13538 at 8/24/16 12:38 AM: -- Attach a simple patch for this. Since I am not a contributor for HADOOP-COMMON, I can't assign this jira to me. was (Author: linyiqun): Attach a simple patch for this. > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13538.001.patch > > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them.
[jira] [Updated] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HADOOP-13538: --- Attachment: HADOOP-13538.001.patch > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13538.001.patch > > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them.
[jira] [Updated] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HADOOP-13538: --- Status: Patch Available (was: Open) > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > Attachments: HADOOP-13538.001.patch > > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them.
[jira] [Created] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
Yiqun Lin created HADOOP-13538: -- Summary: Deprecate getInstance and initiate methods with Path in TrashPolicy Key: HADOOP-13538 URL: https://issues.apache.org/jira/browse/HADOOP-13538 Project: Hadoop Common Issue Type: Improvement Reporter: Yiqun Lin Priority: Minor As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not used anymore. We should deprecate these methods before removing them.
[jira] [Commented] (HADOOP-13538) Deprecate getInstance and initiate methods with Path in TrashPolicy
[ https://issues.apache.org/jira/browse/HADOOP-13538?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433942#comment-15433942 ] Yiqun Lin commented on HADOOP-13538: Attach a simple patch for this. > Deprecate getInstance and initiate methods with Path in TrashPolicy > --- > > Key: HADOOP-13538 > URL: https://issues.apache.org/jira/browse/HADOOP-13538 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Yiqun Lin >Priority: Minor > > As HADOOP-13534 mentioned, the getInstance and initiate APIs with Path are not > used anymore. We should deprecate these methods before removing them.
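Mechanically, the deprecation requested here is small: the Path-based overload stays in place so existing callers keep compiling, but gains {{@Deprecated}} plus a javadoc pointer to the replacement. A self-contained sketch — the class name and signatures below are simplified stand-ins, not the real org.apache.hadoop.fs.TrashPolicy:

```java
/** Simplified stand-in for a TrashPolicy-style class with a deprecated overload. */
public abstract class TrashPolicySketch {

    /**
     * Old entry point that took an explicit trash Path (here modeled as a String).
     * @deprecated the extra argument is no longer used; use {@link #initialize(String)}.
     */
    @Deprecated
    public void initialize(String conf, String trashHome) {
        initialize(conf); // delegate so legacy callers keep working until removal
    }

    /** Replacement entry point without the unused argument. */
    public abstract void initialize(String conf);

    /** Reflection helper: is the initialize(...) overload with these params deprecated? */
    static boolean overloadIsDeprecated(Class<?>... params) {
        try {
            return TrashPolicySketch.class.getMethod("initialize", params)
                    .isAnnotationPresent(Deprecated.class);
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("old overload deprecated: "
                + overloadIsDeprecated(String.class, String.class));
        // prints: old overload deprecated: true
    }
}
```

Deprecating first (this jira) and removing later (the HADOOP-13534 follow-up) gives downstream code one release of compile-time warnings before the break.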
[jira] [Comment Edited] (HADOOP-13497) fix wrong command in CredentialProviderAPI.md
[ https://issues.apache.org/jira/browse/HADOOP-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433914#comment-15433914 ] Yuanbo Liu edited comment on HADOOP-13497 at 8/24/16 12:16 AM: --- [~iwasakims] Thank you very much ! was (Author: yuanbo): [~iwasakims] Thanks you very much ! > fix wrong command in CredentialProviderAPI.md > - > > Key: HADOOP-13497 > URL: https://issues.apache.org/jira/browse/HADOOP-13497 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yuanbo Liu >Assignee: Yuanbo Liu >Priority: Trivial > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13497.001.patch > > > In CredentialProviderAPI.md line 122 > {quote} > Example: `hadoop credential create ssl.server.keystore.password > jceks://file/tmp/test.jceks` > {quote} > should be > {quote} > Example: `hadoop credential create ssl.server.keystore.password -provider > jceks://file/tmp/test.jceks` > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13497) fix wrong command in CredentialProviderAPI.md
[ https://issues.apache.org/jira/browse/HADOOP-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433914#comment-15433914 ] Yuanbo Liu commented on HADOOP-13497: - [~iwasakims] Thanks you very much ! > fix wrong command in CredentialProviderAPI.md > - > > Key: HADOOP-13497 > URL: https://issues.apache.org/jira/browse/HADOOP-13497 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yuanbo Liu >Assignee: Yuanbo Liu >Priority: Trivial > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13497.001.patch > > > In CredentialProviderAPI.md line 122 > {quote} > Example: `hadoop credential create ssl.server.keystore.password > jceks://file/tmp/test.jceks` > {quote} > should be > {quote} > Example: `hadoop credential create ssl.server.keystore.password -provider > jceks://file/tmp/test.jceks` > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-12765: --- Fix Version/s: 2.9.0 > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, > HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, > HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With a sufficient number of SSL connections, > this issue could render the NN HttpServer entirely unresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260.
[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433872#comment-15433872 ] Zhe Zhang commented on HADOOP-12765: I committed to branch-2 and branch-2.8. But backporting to branch-2.7 is having a conflict on the pom files. [~mshen] [~jojochuang] Could you help take a look? Thanks. > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2 > > Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, > HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, > HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With a sufficient number of SSL connections, > this issue could render the NN HttpServer entirely unresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement.
The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260.
[jira] [Commented] (HADOOP-13521) SampleQuantile does not perform well under load
[ https://issues.apache.org/jira/browse/HADOOP-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433859#comment-15433859 ] Andrew Wang commented on HADOOP-13521: -- We should probably rip out MutableQuantiles and replace it with HdrHistogram, in retrospect MQ is over-engineered. > SampleQuantile does not perform well under load > --- > > Key: HADOOP-13521 > URL: https://issues.apache.org/jira/browse/HADOOP-13521 > Project: Hadoop Common > Issue Type: Bug > Components: metrics >Affects Versions: 2.3.0 >Reporter: Mark Wagner > Attachments: Screen Shot 2015-07-09 at 12.37.29 PM.png > > > After adding quantile collection to one of our clusters we saw much higher > latency for RPCs. This was traced down to the quantile collection. Samples > are being buffered and inserted in groups of 500. After the buffered samples > are inserted, the entire set of samples for this time period (600 seconds at > the longest for us) is "compressed". > All operations for RPC metrics are synchronized. Usually this isn't an issue > but it seems that this compression operation is taking a significant amount > of time. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
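The cost profile described above is the crux: MutableQuantiles buffers raw samples and periodically "compresses" the sample set under the same lock that guards recording, so RPC handlers stall behind the compression pass. Histogram libraries such as HdrHistogram instead bump a pre-sized bucket counter on every record, so there is no deferred compression step at all. A stdlib-only toy illustrating that bucketed shape (power-of-two buckets; this is an illustration of the idea, not the HdrHistogram API):

```java
import java.util.concurrent.atomic.AtomicLongArray;

/** Toy power-of-two bucketed histogram: O(1), lock-free recording, no deferred "compress" pass. */
public class BucketHistogram {
    private final AtomicLongArray buckets = new AtomicLongArray(64);

    /** Record a value by incrementing its log2 bucket; never blocks other recorders. */
    public void record(long value) {
        int b = 64 - Long.numberOfLeadingZeros(Math.max(value, 1)); // floor(log2(v)) + 1
        buckets.incrementAndGet(Math.min(b, 63));
    }

    /** Approximate value at quantile q in [0,1]: the upper bound of the bucket holding it. */
    public long valueAt(double q) {
        long total = 0;
        for (int i = 0; i < 64; i++) total += buckets.get(i);
        long rank = (long) Math.ceil(q * total), seen = 0;
        for (int i = 0; i < 64; i++) {
            seen += buckets.get(i);
            if (seen >= rank) return 1L << i;
        }
        return Long.MAX_VALUE;
    }
}
```

The trade is precision for predictability: each record touches one atomic counter, and a quantile read scans a fixed 64-slot array instead of compressing an unbounded sample buffer inside a synchronized block.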
[jira] [Assigned] (HADOOP-13065) Add a new interface for retrieving FS and FC Statistics
[ https://issues.apache.org/jira/browse/HADOOP-13065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aaron Fabbri reassigned HADOOP-13065: - Assignee: Aaron Fabbri (was: Mingliang Liu) > Add a new interface for retrieving FS and FC Statistics > --- > > Key: HADOOP-13065 > URL: https://issues.apache.org/jira/browse/HADOOP-13065 > Project: Hadoop Common > Issue Type: New Feature > Components: fs >Reporter: Ram Venkatesh >Assignee: Aaron Fabbri > Fix For: 2.8.0 > > Attachments: HADOOP-13065-007.patch, HADOOP-13065.008.patch, > HADOOP-13065.009.patch, HADOOP-13065.010.patch, HADOOP-13065.011.patch, > HADOOP-13065.012.patch, HADOOP-13065.013.patch, HDFS-10175.000.patch, > HDFS-10175.001.patch, HDFS-10175.002.patch, HDFS-10175.003.patch, > HDFS-10175.004.patch, HDFS-10175.005.patch, HDFS-10175.006.patch, > TestStatisticsOverhead.java > > > Currently FileSystem.Statistics exposes the following statistics: > BytesRead > BytesWritten > ReadOps > LargeReadOps > WriteOps > These are in-turn exposed as job counters by MapReduce and other frameworks. > There is logic within DfsClient to map operations to these counters that can > be confusing, for instance, mkdirs counts as a writeOp. > Proposed enhancement: > Add a statistic for each DfsClient operation including create, append, > createSymlink, delete, exists, mkdirs, rename and expose them as new > properties on the Statistics object. The operation-specific counters can be > used for analyzing the load imposed by a particular job on HDFS. > For example, we can use them to identify jobs that end up creating a large > number of files. > Once this information is available in the Statistics object, the app > frameworks like MapReduce can expose them as additional counters to be > aggregated and recorded as part of job summary. 
[jira] [Commented] (HADOOP-13487) Hadoop KMS should load old delegation tokens from Zookeeper on startup
[ https://issues.apache.org/jira/browse/HADOOP-13487?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433828#comment-15433828 ] Alex Ivanov commented on HADOOP-13487: -- Thank you for clarifying, [~xiaochen], and for submitting a patch so promptly! > Hadoop KMS should load old delegation tokens from Zookeeper on startup > -- > > Key: HADOOP-13487 > URL: https://issues.apache.org/jira/browse/HADOOP-13487 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.6.0 >Reporter: Alex Ivanov >Assignee: Xiao Chen > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13487.01.patch, HADOOP-13487.02.patch, > HADOOP-13487.03.patch, HADOOP-13487.04.patch, HADOOP-13487.05.patch > > > Configuration: > CDH 5.5.1 (Hadoop 2.6+) > KMS configured to store delegation tokens in Zookeeper > DEBUG logging enabled in /etc/hadoop-kms/conf/kms-log4j.properties > Findings: > It seems to me delegation tokens never get cleaned up from Zookeeper past > their renewal date. I can see in the logs that the removal thread is started > with the expected interval: > {code} > 2016-08-11 08:15:24,511 INFO AbstractDelegationTokenSecretManager - Starting > expired delegation token remover thread, tokenRemoverScanInterval=60 min(s) > {code} > However, I don't see any delegation token removals, indicated by the > following log message: > org.apache.hadoop.security.token.delegation.ZKDelegationTokenSecretManager > --> removeStoredToken(TokenIdent ident), line 769 [CDH] > {code} > if (LOG.isDebugEnabled()) { > LOG.debug("Removing ZKDTSMDelegationToken_" > + ident.getSequenceNumber()); > } > {code} > Meanwhile, I see a lot of expired delegation tokens in Zookeeper that don't > get cleaned up. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-13451) S3Guard: Implement access policy using metadata store as source of truth.
[ https://issues.apache.org/jira/browse/HADOOP-13451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu reassigned HADOOP-13451: -- Assignee: Lei (Eddy) Xu > S3Guard: Implement access policy using metadata store as source of truth. > - > > Key: HADOOP-13451 > URL: https://issues.apache.org/jira/browse/HADOOP-13451 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Lei (Eddy) Xu > > Implement an S3A access policy that provides strong consistency and improved > performance by using the metadata store as the source of truth for metadata > operations. In many cases, this will allow S3A to short-circuit calls to S3. > Assuming shorter latency for calls to the metadata store compared to S3, we > expect this will improve overall performance. With this policy, a client may > not be capable of reading data loaded into an S3 bucket by external tools > that don't integrate with the metadata store. Users need to be made aware of > this limitation. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13450) S3Guard: Implement access policy providing strong consistency with S3 as source of truth.
[ https://issues.apache.org/jira/browse/HADOOP-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lei (Eddy) Xu updated HADOOP-13450: --- Assignee: (was: Lei (Eddy) Xu) > S3Guard: Implement access policy providing strong consistency with S3 as > source of truth. > - > > Key: HADOOP-13450 > URL: https://issues.apache.org/jira/browse/HADOOP-13450 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth > > Implement an S3A access policy that provides strong consistency by > cross-checking with the consistent metadata store, but still using S3 as the > source of truth. This access policy will be well suited to users who > want an improved consistency guarantee but also want the freedom to load data > into the bucket using external tools that don't integrate with the metadata > store.
[jira] [Commented] (HADOOP-12668) Support excluding weak Ciphers in HttpServer2 through ssl-server.conf
[ https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433735#comment-15433735 ] Zhe Zhang commented on HADOOP-12668: I cherry-picked this to branch-2.7 in support of HADOOP-12765. > Support excluding weak Ciphers in HttpServer2 through ssl-server.conf > -- > > Key: HADOOP-12668 > URL: https://issues.apache.org/jira/browse/HADOOP-12668 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.1 >Reporter: Vijay Singh >Assignee: Vijay Singh >Priority: Critical > Labels: common, ha, hadoop, hdfs, security > Fix For: 2.8.0, 2.7.4 > > Attachments: Hadoop-12668.006.patch, Hadoop-12668.007.patch, > Hadoop-12668.008.patch, Hadoop-12668.009.patch, Hadoop-12668.010.patch, > Hadoop-12668.011.patch, Hadoop-12668.012.patch, test.log > > Original Estimate: 24h > Remaining Estimate: 24h > > Currently the embedded Jetty server used across all Hadoop services is configured > through the ssl-server.xml file from their respective configuration sections. > However, the SSL/TLS protocol used by these Jetty servers can be > downgraded to weak cipher suites. This change aims to add the following > functionality: > 1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to > spawn Jetty servers with the ability to exclude weak cipher suites. I propose we > make this configurable through ssl-server.xml, so each service can choose to disable > specific ciphers. > 2) Modify DFSUtil.java used by HDFS code to supply the new parameter > ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude the > ciphers supplied through this key.
[jira] [Updated] (HADOOP-12668) Support excluding weak Ciphers in HttpServer2 through ssl-server.conf
[ https://issues.apache.org/jira/browse/HADOOP-12668?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-12668: --- Fix Version/s: 2.7.4 > Support excluding weak Ciphers in HttpServer2 through ssl-server.conf > -- > > Key: HADOOP-12668 > URL: https://issues.apache.org/jira/browse/HADOOP-12668 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 2.7.1 >Reporter: Vijay Singh >Assignee: Vijay Singh >Priority: Critical > Labels: common, ha, hadoop, hdfs, security > Fix For: 2.8.0, 2.7.4 > > Attachments: Hadoop-12668.006.patch, Hadoop-12668.007.patch, > Hadoop-12668.008.patch, Hadoop-12668.009.patch, Hadoop-12668.010.patch, > Hadoop-12668.011.patch, Hadoop-12668.012.patch, test.log > > Original Estimate: 24h > Remaining Estimate: 24h > > Currently the embedded Jetty server used across all Hadoop services is configured > through the ssl-server.xml file from their respective configuration sections. > However, the SSL/TLS protocol used by these Jetty servers can be > downgraded to weak cipher suites. This change aims to add the following > functionality: > 1) Add logic in hadoop-common (HttpServer2.java and associated interfaces) to > spawn Jetty servers with the ability to exclude weak cipher suites. I propose we > make this configurable through ssl-server.xml, so each service can choose to disable > specific ciphers. > 2) Modify DFSUtil.java used by HDFS code to supply the new parameter > ssl.server.exclude.cipher.list to the hadoop-common code, so it can exclude the > ciphers supplied through this key.
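The configuration this change enables takes the shape of an ordinary Hadoop property in ssl-server.xml, keyed by the ssl.server.exclude.cipher.list name given in the description. The cipher suite names below are an illustrative example of weak suites one might exclude, not a vetted or recommended list:

```xml
<!-- ssl-server.xml: illustrative exclusion of weak cipher suites -->
<property>
  <name>ssl.server.exclude.cipher.list</name>
  <value>SSL_RSA_WITH_DES_CBC_SHA,SSL_DHE_RSA_WITH_DES_CBC_SHA,TLS_ECDHE_RSA_WITH_RC4_128_SHA</value>
</property>
```

Because each daemon reads its own ssl-server.xml, services can opt in to different exclusion lists independently, which is the per-service flexibility point 1) of the description argues for.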
[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-12765: --- Fix Version/s: 2.8.0 > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, > HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, > HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With a sufficient number of SSL connections, > this issue could render the NN HttpServer entirely unresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260.
[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433680#comment-15433680 ] Hadoop QA commented on HADOOP-13055: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 17s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 46s{color} | 
{color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 11 new + 43 unchanged - 2 fixed = 54 total (was 45) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 52s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 12s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.viewfs.TestLinkMergeSlash | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825130/HADOOP-13055.01.patch | | JIRA Issue | HADOOP-13055 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 7faef2ce3c8f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 143c59e | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10348/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/10348/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10348/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10348/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Implement linkMergeSlash for ViewFs > --- > > Key: HADOOP-13055 >
[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-12765: --- Target Version/s: 2.7.4 (was: 2.9.0) > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, > HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, > HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With a sufficient number of SSL connections, > this issue could render the NN HttpServer entirely unresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260.
-- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433671#comment-15433671 ] Zhe Zhang commented on HADOOP-12765: Just noticed that the branch-2 patch has already passed Jenkins. +1. I will commit shortly.
[jira] [Commented] (HADOOP-12726) Unsupported FS operations should throw UnsupportedOperationException
[ https://issues.apache.org/jira/browse/HADOOP-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433670#comment-15433670 ] Hudson commented on HADOOP-12726: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10332 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10332/]) HADOOP-12726. Unsupported FS operations should throw (cdouglas: rev c37346d0e3f9d39d0aec7a9c5bda3e9772aa969b) * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFileSystem.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/sftp/SFTPFileSystem.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/sink/RollingFileSystemSink.java * (edit) hadoop-common-project/hadoop-common/src/site/markdown/filesystem/filesystem.md * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ftp/FTPFileSystem.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3native/NativeS3FileSystem.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * (edit) hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/ChecksumFs.java > Unsupported FS operations should throw UnsupportedOperationException > > > Key: HADOOP-12726 > URL: https://issues.apache.org/jira/browse/HADOOP-12726 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.7.1 >Reporter: Daniel Templeton >Assignee: Daniel Templeton > Fix For: 3.0.0-alpha1 > > Attachments: HADOOP-12726.001.patch, HADOOP-12726.002.patch, > HADOOP-12726.003.patch > > > In the {{FileSystem}} implementation classes, unsupported operations throw > {{new IOException("Not supported")}}, which makes it needlessly difficult to > distinguish an actual error from an unsupported operation. They should > instead throw {{new UnsupportedOperationException()}}. 
> It's possible that this anti-idiom is used elsewhere in the code base. This > JIRA should include finding and cleaning up those instances as well.
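The pattern the JIRA targets can be sketched in a few lines of plain Java. `DemoFileSystem` below is a hypothetical stand-in (not the real `org.apache.hadoop.fs.FileSystem`): throwing `UnsupportedOperationException` lets a caller separate "this backend can't do that" from a genuine I/O failure, which `new IOException("Not supported")` conflates.

```java
import java.io.IOException;

// Hypothetical stand-in for a FileSystem subclass (illustration only).
class DemoFileSystem {
    void truncate(String path) {
        // Before HADOOP-12726 the idiom was: throw new IOException("Not supported");
        throw new UnsupportedOperationException(
                getClass().getSimpleName() + " does not support truncate");
    }

    void read(String path) throws IOException {
        // A genuine I/O failure keeps using IOException.
        throw new IOException("failed to read " + path);
    }
}

public class Main {
    public static void main(String[] args) {
        boolean unsupported = false;
        try {
            new DemoFileSystem().truncate("/tmp/f");
        } catch (UnsupportedOperationException e) {
            unsupported = true; // clearly "not supported", not an I/O error
        }
        System.out.println(unsupported);
    }
}
```

A caller that catches `IOException` broadly will no longer swallow "unsupported" as a transient error, which is exactly the distinction the patch restores.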
[jira] [Commented] (HADOOP-12726) Unsupported FS operations should throw UnsupportedOperationException
[ https://issues.apache.org/jira/browse/HADOOP-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433644#comment-15433644 ] Daniel Templeton commented on HADOOP-12726: --- Thanks, [~chris.douglas] and [~steve_l]!
[jira] [Updated] (HADOOP-12726) Unsupported FS operations should throw UnsupportedOperationException
[ https://issues.apache.org/jira/browse/HADOOP-12726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Douglas updated HADOOP-12726: --- Resolution: Fixed Hadoop Flags: Incompatible change,Reviewed (was: Incompatible change) Fix Version/s: 3.0.0-alpha1 Target Version/s: (was: ) Status: Resolved (was: Patch Available) Checked with [~ste...@apache.org] offline. I committed this. Thanks Daniel
[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.
[ https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433626#comment-15433626 ] Aaron Fabbri commented on HADOOP-13448: --- Thanks for the patch! Looks pretty good, especially since we will evolve it as needed. - I really like having initialize() / Closable as well. - put() / putNew() this is fine to eliminate putNew().. I wasn't convinced the distinction was necessary. We'll need to go through the same exercise of defining precise semantics, and writing test cases, that we've done for the FS Contract test stuff. We can always add putNew() if the distinction is useful. On PathMetadata: - Could also use a private boolean isDirectory instead introducing separate private class to return true/false. Besides saving a couple of lines of code, it may make the next suggestion clearer in the code... (Not a biggie though) On listChildren() return type: - I think we need more than a List if we are to express fully-cached directories, so callers--e.g. listStatus()--know if they may avoid a round trip to blobstore. How about an additional type, say, DirectoryListMetadata (I'd called it CachedDirectory). It is essentially a struct of List + extra state. The extra state I want, so far, is just the "fully cached" boolean flag. Please shout if any of that is not clear... I'm a little sleep-deprived today. > S3Guard: Define MetadataStore interface. > > > Key: HADOOP-13448 > URL: https://issues.apache.org/jira/browse/HADOOP-13448 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13448-HADOOP-13345.001.patch > > > Define the common interface for metadata store operations. This is the > interface that any metadata back-end must implement in order to integrate > with S3Guard. 
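The `listChildren()` return type being discussed can be roughed out as below. This is a sketch only: the names `PathMetadata` and `DirectoryListMetadata` come from the comment and are proposals, not a committed S3Guard API.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// (1) PathMetadata carries a boolean isDirectory flag rather than separate
//     private subclasses, as suggested above.
class PathMetadata {
    final String path;
    final boolean isDirectory;
    PathMetadata(String path, boolean isDirectory) {
        this.path = path;
        this.isDirectory = isDirectory;
    }
    @Override
    public String toString() {
        // Includes the isDirectory flag in the output.
        return "PathMetadata{path=" + path + ", isDirectory=" + isDirectory + "}";
    }
}

// (2) listChildren() returns a listing plus a "fully cached" flag, so a
//     caller such as listStatus() knows whether it may skip the round trip
//     to the blobstore.
class DirectoryListMetadata {
    private final List<PathMetadata> entries;
    private final boolean fullyCached;
    DirectoryListMetadata(List<PathMetadata> entries, boolean fullyCached) {
        this.entries = Collections.unmodifiableList(new ArrayList<>(entries));
        this.fullyCached = fullyCached;
    }
    List<PathMetadata> getEntries() { return entries; }
    boolean isFullyCached() { return fullyCached; }
}

public class Main {
    public static void main(String[] args) {
        List<PathMetadata> kids = new ArrayList<>();
        kids.add(new PathMetadata("s3a://bucket/dir/file1", false));
        kids.add(new PathMetadata("s3a://bucket/dir/sub", true));
        DirectoryListMetadata listing = new DirectoryListMetadata(kids, true);
        // Fully cached: listStatus() could be served from the store alone.
        System.out.println(listing.isFullyCached() + " " + listing.getEntries().size());
    }
}
```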
[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-13055: --- Attachment: HADOOP-13055.01.patch Updating patch: # Fixing a bug in initializing {{root}} which caused the unit test failures # Enforcing that merge slash and regular links don't co-exist # Add unit test for above > Implement linkMergeSlash for ViewFs > --- > > Key: HADOOP-13055 > URL: https://issues.apache.org/jira/browse/HADOOP-13055 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, viewfs >Reporter: Zhe Zhang >Assignee: Zhe Zhang > Attachments: HADOOP-13055.00.patch, HADOOP-13055.01.patch > > > In a multi-cluster environment it is sometimes useful to operate on the root > / slash directory of an HDFS cluster. E.g., list all top level directories. > Quoting the comment in {{ViewFs}}: > {code} > * A special case of the merge mount is where mount table's root is merged > * with the root (slash) of another file system: > * > * fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/ > * > * In this cases the root of the mount table is merged with the root of > *hdfs://nn99/ > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13537) Support external calls in the RPC call queue
[ https://issues.apache.org/jira/browse/HADOOP-13537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp updated HADOOP-13537: - Attachment: HADOOP-13537.patch > Support external calls in the RPC call queue > > > Key: HADOOP-13537 > URL: https://issues.apache.org/jira/browse/HADOOP-13537 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Attachments: HADOOP-13537.patch > > > Leveraging HADOOP-13465 will allow non-rpc calls to be added to the call > queue. This is intended to support routing webhdfs calls through the call > queue to provide a unified and protocol-independent QoS.
[jira] [Created] (HADOOP-13537) Support external calls in the RPC call queue
Daryn Sharp created HADOOP-13537: Summary: Support external calls in the RPC call queue Key: HADOOP-13537 URL: https://issues.apache.org/jira/browse/HADOOP-13537 Project: Hadoop Common Issue Type: Improvement Components: ipc Reporter: Daryn Sharp Assignee: Daryn Sharp
[jira] [Commented] (HADOOP-13519) Make Path serializable
[ https://issues.apache.org/jira/browse/HADOOP-13519?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433566#comment-15433566 ] Chris Douglas commented on HADOOP-13519: +1 (whitespace looks fine, not sure why Jenkins is complaining). Why a float constant in hex? {noformat} + private static final long serialVersionUID = 0xad00f; {noformat} > Make Path serializable > -- > > Key: HADOOP-13519 > URL: https://issues.apache.org/jira/browse/HADOOP-13519 > Project: Hadoop Common > Issue Type: Improvement > Components: io >Affects Versions: 2.7.2 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13519-branch-2-001.patch, > HADOOP-13519-branch-2-002.patch > > > If you could make Hadoop Paths serializable, you can use them in Spark > operations without having to convert them to and from URIs. > It's trivial for paths to support this; as well as the OS code we need to add > a check that there's no null URI coming in over the wire, and test to > validate round tripping
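The null-URI check and round-trip validation described in the issue can be sketched with stock Java serialization. `DemoPath` is an illustrative stand-in for `org.apache.hadoop.fs.Path`, and the `serialVersionUID` here is arbitrary, not the value from the patch.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InvalidObjectException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.URI;

// Illustrative stand-in for org.apache.hadoop.fs.Path (sketch only).
class DemoPath implements Serializable {
    private static final long serialVersionUID = 1L; // arbitrary for the demo
    private URI uri;

    DemoPath(URI uri) { this.uri = uri; }
    URI toUri() { return uri; }

    // Reject a null URI coming in over the wire, as the JIRA suggests.
    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        if (uri == null) {
            throw new InvalidObjectException("No URI in deserialized DemoPath");
        }
    }
}

public class Main {
    public static void main(String[] args) throws Exception {
        DemoPath p = new DemoPath(URI.create("hdfs://nn1/user/alice"));

        // Round-trip through Java serialization and validate equality.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(p);
        oos.close();
        ObjectInputStream ois = new ObjectInputStream(
                new ByteArrayInputStream(bos.toByteArray()));
        DemoPath q = (DemoPath) ois.readObject();

        System.out.println(q.toUri().equals(p.toUri()));
    }
}
```

A test like this round trip (plus a case that feeds in a serialized form with a null URI and expects `InvalidObjectException`) is essentially what the issue asks for.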
[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433454#comment-15433454 ] Hadoop QA commented on HADOOP-13055: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 6m 47s{color} | 
{color:red} root generated 2 new + 709 unchanged - 1 fixed = 711 total (was 710) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 8 new + 43 unchanged - 2 fixed = 51 total (was 45) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 49s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 5s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.viewfs.TestViewFileSystemDelegation | | | hadoop.fs.viewfs.TestFcPermissionsLocalFs | | | hadoop.fs.viewfs.TestViewFsURIs | | | hadoop.fs.viewfs.TestViewfsFileStatus | | | hadoop.fs.viewfs.TestViewFsTrash | | | hadoop.fs.viewfs.TestFcCreateMkdirLocalFs | | | hadoop.fs.viewfs.TestFcMainOperationsLocalFs | | | hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem | | | hadoop.fs.viewfs.TestViewFileSystemDelegationTokenSupport | | | hadoop.fs.viewfs.TestViewFileSystemLocalFileSystem | | | hadoop.fs.viewfs.TestViewFsWithAuthorityLocalFs | | | hadoop.fs.viewfs.TestViewFsConfig | | | hadoop.fs.viewfs.TestViewFileSystemWithAuthorityLocalFileSystem | | | hadoop.fs.viewfs.TestChRootedFileSystem | | | hadoop.fs.viewfs.TestViewFsLocalFs | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825116/HADOOP-13055.00.patch | | JIRA Issue | HADOOP-13055 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 035132ca9641 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8aae8d6 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | javac | https://builds.apache.o
[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-13055: --- Attachment: HADOOP-13055.00.patch Pretty rough initial patch to test whether the idea breaks any existing unit tests. I'm still working on: # Enforcing {{linkMergeSlash}} is not used together with regular links # An issue on {{ViewFileSystem#getFileStatus}} causing the returned status to have a wrong path (the {{LocatedFileStatus}} that it wraps is correct) # More unit tests
[jira] [Updated] (HADOOP-13055) Implement linkMergeSlash for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-13055: --- Status: Patch Available (was: Open)
[jira] [Created] (HADOOP-13536) Clean up code
Elliott Clark created HADOOP-13536: -- Summary: Clean up code Key: HADOOP-13536 URL: https://issues.apache.org/jira/browse/HADOOP-13536 Project: Hadoop Common Issue Type: Sub-task Reporter: Elliott Clark Some code comments came in while discussing merging. We should clean all those up and get everything that's known done.
[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab
[ https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jing Zhao updated HADOOP-13433: --- Assignee: Duo Zhang > Race in UGI.reloginFromKeytab > - > > Key: HADOOP-13433 > URL: https://issues.apache.org/jira/browse/HADOOP-13433 > Project: Hadoop Common > Issue Type: Bug > Components: security >Reporter: Duo Zhang >Assignee: Duo Zhang > Attachments: HADOOP-13433.patch > > > This is a problem that has troubled us for several years. For our HBase > cluster, sometimes the RS will be stuck due to > {noformat} > 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception > encountered while connecting to the server : > javax.security.sasl.SaslException: GSS initiate failed [Caused by > GSSException: No valid credentials provided (Mechanism level: The ticket > isn't for us (35) - BAD TGS SERVER NAME)] > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194) > at > org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:396) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781) > at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) > at java.lang.reflect.Method.invoke(Method.java:597) > at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37) > at 
org.apache.hadoop.hbase.security.User.call(User.java:607) > at org.apache.hadoop.hbase.security.User.access$700(User.java:51) > at > org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461) > at > org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321) > at > org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164) > at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004) > at > org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107) > at $Proxy24.replicateLogEntries(Unknown Source) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466) > at > org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515) > Caused by: GSSException: No valid credentials provided (Mechanism level: The > ticket isn't for us (35) - BAD TGS SERVER NAME) > at > sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663) > at > sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248) > at > sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180) > at > com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175) > ... 23 more > Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME > at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64) > at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185) > at > sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294) > at > sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106) > at > sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557) > at > sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594) > ... 
26 more > Caused by: KrbException: Identifier doesn't match expected value (906) > at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133) > at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58) > at sun.security.krb5.internal.TGSRep.(TGSRep.java:53) > at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:46) > ... 31 more > {noformat} > It rarely happens, but if it happens, the regionserver will be stuck and can > never recover. > Recently we added a log after a successful re-login which prints the private > credentials, and finally catched the d
[jira] [Commented] (HADOOP-13052) ChecksumFileSystem mishandles crc file permissions
[ https://issues.apache.org/jira/browse/HADOOP-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433317#comment-15433317 ] Chris Trezzo commented on HADOOP-13052: --- Adding 2.6.5 to the target versions with the intention of backporting this to branch-2.6. Please let me know if you think otherwise. Thanks! > ChecksumFileSystem mishandles crc file permissions > -- > > Key: HADOOP-13052 > URL: https://issues.apache.org/jira/browse/HADOOP-13052 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.7.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Fix For: 2.7.3 > > Attachments: HADOOP-13052.patch > > > CheckFileSystem does not override permission related calls to apply those > operations to the hidden crc files. Clients may be unable to read the crcs > if the file is created with strict permissions and then relaxed. > The checksum fs is designed to work with or w/o crcs present, so it silently > ignores FNF exceptions. The java file stream apis unfortunately may only > throw FNF, so permission denied becomes FNF resulting in this bug going > silently unnoticed. > (Problem discovered via public localizer. Files are downloaded as > user-readonly and then relaxed to all-read. The crc remains user-readonly) -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
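The bug described above reduces to permission operations not being mirrored onto the hidden `.crc` sibling. A toy in-memory sketch (not the real `ChecksumFileSystem`; plain octal ints stand in for `FsPermission`) of the localizer scenario from the description:

```java
import java.util.HashMap;
import java.util.Map;

public class Main {
    static final Map<String, Integer> perms = new HashMap<>();

    // Hidden checksum file convention: ".<name>.crc" in the same directory.
    static String crcOf(String path) {
        int slash = path.lastIndexOf('/');
        return path.substring(0, slash + 1) + "." + path.substring(slash + 1) + ".crc";
    }

    static void create(String path, int perm) {
        perms.put(path, perm);
        perms.put(crcOf(path), perm);
    }

    // The fix: permission-related calls must also touch the crc file.
    static void setPermission(String path, int perm) {
        perms.put(path, perm);
        perms.put(crcOf(path), perm); // without this line the crc stays strict
    }

    public static void main(String[] args) {
        create("/cache/part-0", 0400);        // downloaded user-readonly
        setPermission("/cache/part-0", 0444); // relaxed to all-read
        // With the override in place the crc is readable too.
        System.out.println(perms.get("/cache/.part-0.crc") == 0444);
    }
}
```

Without the mirrored update, a reader with all-read access hits permission-denied on the crc, which the stream APIs surface as FileNotFoundException and the checksum layer silently ignores, matching the "silently unnoticed" behavior in the report.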
[jira] [Updated] (HADOOP-13052) ChecksumFileSystem mishandles crc file permissions
[ https://issues.apache.org/jira/browse/HADOOP-13052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Trezzo updated HADOOP-13052: -- Target Version/s: 2.6.5
[jira] [Commented] (HADOOP-12810) FileSystem#listLocatedStatus causes unnecessary RPC calls
[ https://issues.apache.org/jira/browse/HADOOP-12810?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433306#comment-15433306 ] Chris Trezzo commented on HADOOP-12810: --- Adding 2.6.5 to the target versions with the intention of backporting this to branch-2.6. We would also backport the associated MAPREDUCE-6637 for the test fix. Please let me know if you think otherwise. Thanks! > FileSystem#listLocatedStatus causes unnecessary RPC calls > - > > Key: HADOOP-12810 > URL: https://issues.apache.org/jira/browse/HADOOP-12810 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/s3 >Affects Versions: 2.7.2 >Reporter: Ryan Blue >Assignee: Ryan Blue > Fix For: 2.7.3 > > Attachments: HADOOP-12810.1.patch > > > {{FileSystem#listLocatedStatus}} lists the files in a directory and then > calls {{getFileBlockLocations(stat.getPath(), ...)}} for each instead of > {{getFileBlockLocations(stat, ...)}}. That function with the path arg just > calls {{getFileStatus}} to get another file status from the path and calls > the file status version, so this ends up calling {{getFileStatus}} > unnecessarily. > This is particularly bad for S3, where {{getFileStatus}} is expensive. > Avoiding the extra call improved input split calculation time for a data set > in S3 by ~20x: from 10 minutes to 25 seconds. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
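A minimal sketch of the call-count difference (toy classes, not the real Hadoop `FileSystem` API): the path-based overload must re-resolve a `FileStatus` first, costing one extra lookup per file, while the status-based overload costs none.

```java
public class Main {
    static int statusLookups = 0;

    static class FileStatus {
        final String path;
        FileStatus(String path) { this.path = path; }
    }

    static FileStatus getFileStatus(String path) {
        statusLookups++; // in HDFS this is an RPC; in S3 an expensive request
        return new FileStatus(path);
    }

    // Path-based overload: has to re-resolve the status first.
    static void getFileBlockLocations(String path) {
        getFileBlockLocations(getFileStatus(path));
    }

    // Status-based overload: no extra lookup needed.
    static void getFileBlockLocations(FileStatus stat) {
        // ... compute block locations from stat ...
    }

    public static void main(String[] args) {
        FileStatus stat = getFileStatus("/data/part-0"); // one lookup from listing
        getFileBlockLocations(stat.path); // before the patch: one extra lookup
        getFileBlockLocations(stat);      // after the patch: no extra lookup
        System.out.println(statusLookups); // path overload added one; stat overload none
    }
}
```

Multiplied across every file in a large listing, eliminating that per-file lookup is what produced the ~20x split-calculation speedup reported for S3.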
[jira] [Updated] (HADOOP-12810) FileSystem#listLocatedStatus causes unnecessary RPC calls
[ https://issues.apache.org/jira/browse/HADOOP-12810?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Trezzo updated HADOOP-12810: -- Target Version/s: 2.6.5 > FileSystem#listLocatedStatus causes unnecessary RPC calls > - > > Key: HADOOP-12810 > URL: https://issues.apache.org/jira/browse/HADOOP-12810 > Project: Hadoop Common > Issue Type: Bug > Components: fs, fs/s3 >Affects Versions: 2.7.2 >Reporter: Ryan Blue >Assignee: Ryan Blue > Fix For: 2.7.3 > > Attachments: HADOOP-12810.1.patch > > > {{FileSystem#listLocatedStatus}} lists the files in a directory and then > calls {{getFileBlockLocations(stat.getPath(), ...)}} for each instead of > {{getFileBlockLocations(stat, ...)}}. That function with the path arg just > calls {{getFileStatus}} to get another file status from the path and calls > the file status version, so this ends up calling {{getFileStatus}} > unnecessarily. > This is particularly bad for S3, where {{getFileStatus}} is expensive. > Avoiding the extra call improved input split calculation time for a data set > in S3 by ~20x: from 10 minutes to 25 seconds. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13448) S3Guard: Define MetadataStore interface.
[ https://issues.apache.org/jira/browse/HADOOP-13448?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433301#comment-15433301 ] Lei (Eddy) Xu commented on HADOOP-13448: Hi, [~cnauroth] The interface looks very nice. Just one small comment. For {{PathMetadata#toString()}}, it'd be nice to have {{isDirectory}} flag in the output. Btw, {{DirectoryPathMetadata/FilePathMetadata}} will be implemented in place later? Thanks. > S3Guard: Define MetadataStore interface. > > > Key: HADOOP-13448 > URL: https://issues.apache.org/jira/browse/HADOOP-13448 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Attachments: HADOOP-13448-HADOOP-13345.001.patch > > > Define the common interface for metadata store operations. This is the > interface that any metadata back-end must implement in order to integrate > with S3Guard. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13344) Add option to exclude Hadoop's SLF4J binding
[ https://issues.apache.org/jira/browse/HADOOP-13344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433256#comment-15433256 ] Thomas Poepping commented on HADOOP-13344: -- We cannot, because we may not have access to the other classpath. The problem here is in applications with their own classpath that consume the hadoop classpath, rather than something like a hadoop jar that wants to use its own SLF4J binding. Does that make sense? > Add option to exclude Hadoop's SLF4J binding > > > Key: HADOOP-13344 > URL: https://issues.apache.org/jira/browse/HADOOP-13344 > Project: Hadoop Common > Issue Type: New Feature > Components: bin, scripts >Affects Versions: 2.8.0, 2.7.2 >Reporter: Thomas Poepping >Assignee: Thomas Poepping > Labels: patch > Attachments: HADOOP-13344.patch > > > If another application that uses the Hadoop classpath brings in its own SLF4J > binding for logging, and that jar is not the exact same as the one brought in > by Hadoop, then there will be a conflict between logging jars between the two > classpaths. This patch introduces an optional setting to remove Hadoop's > SLF4J binding from the classpath, to get rid of this problem. > This patch should be applied to 2.8.0, as bin/ and hadoop-config.sh structure > has been changed in 3.0.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
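The effect of the proposed option can be illustrated as classpath filtering. Assumptions are loud here: the actual patch works in the bin/ shell scripts, not in Java, and the jar name pattern below just reflects the common slf4j-log4j12 binding naming.

```java
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ClasspathFilterSketch {
    // Drop Hadoop's SLF4J binding jar so the consuming application's own
    // binding is the only one left on the merged classpath.
    static String excludeSlf4jBinding(String classpath) {
        return Stream.of(classpath.split(":"))
            .filter(entry -> !entry.matches(".*slf4j-log4j12.*\\.jar"))
            .collect(Collectors.joining(":"));
    }

    public static void main(String[] args) {
        String cp = "/hadoop/lib/hadoop-common.jar:/hadoop/lib/slf4j-log4j12-1.7.10.jar:/hadoop/lib/guava.jar";
        System.out.println(excludeSlf4jBinding(cp));
    }
}
```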
[jira] [Commented] (HADOOP-13055) Implement linkMergeSlash for ViewFs
[ https://issues.apache.org/jira/browse/HADOOP-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433234#comment-15433234 ] Zhe Zhang commented on HADOOP-13055: I think we need to make two changes to {{InodeTree}}: # {{root}} of a mount table can either be an {{INodeDir}} or an {{INodeLink}}. So we should make it an {{INode}} and assign its value after checking the configurations (at the end of the for loop in the constructor). # Enforce that when {{linkMergeSlash}} is configured, no other links can be configured for that mount table I'm writing a patch to implement the above. Any thoughts are very welcome. > Implement linkMergeSlash for ViewFs > --- > > Key: HADOOP-13055 > URL: https://issues.apache.org/jira/browse/HADOOP-13055 > Project: Hadoop Common > Issue Type: New Feature > Components: fs, viewfs >Reporter: Zhe Zhang >Assignee: Zhe Zhang > > In a multi-cluster environment it is sometimes useful to operate on the root > / slash directory of an HDFS cluster. E.g., list all top level directories. > Quoting the comment in {{ViewFs}}: > {code} > * A special case of the merge mount is where mount table's root is merged > * with the root (slash) of another file system: > * > * fs.viewfs.mounttable.default.linkMergeSlash=hdfs://nn99/ > * > * In this cases the root of the mount table is merged with the root of > *hdfs://nn99/ > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
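Point 2 above — rejecting any mount table that mixes linkMergeSlash with other links — might look like the following sketch. The method is hypothetical, not the actual InodeTree code; only the fs.viewfs.mounttable.* key convention is taken from the quoted ViewFs comment.

```java
import java.util.List;

public class MountTableCheck {
    // Enforce: a mount table configuring linkMergeSlash carries no other links.
    static void validate(List<String> linkKeys) {
        boolean hasMergeSlash =
            linkKeys.stream().anyMatch(k -> k.contains("linkMergeSlash"));
        if (hasMergeSlash && linkKeys.size() > 1) {
            throw new IllegalArgumentException(
                "linkMergeSlash cannot be combined with other mount-table links");
        }
    }

    public static void main(String[] args) {
        validate(List.of("fs.viewfs.mounttable.default.linkMergeSlash")); // fine
        try {
            validate(List.of("fs.viewfs.mounttable.default.linkMergeSlash",
                             "fs.viewfs.mounttable.default.link./user"));
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // the mixed configuration is rejected
        }
    }
}
```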
[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433177#comment-15433177 ] Wei-Chiu Chuang commented on HADOOP-12765: -- Thanks [~zhz] for the review. I've filed HADOOP-13535 to fix the potential bug. > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, > HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, > HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With sufficient number of SSL connections, > this issue could render NN HttpServer to become entirely irresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. 
> The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260.
[jira] [Commented] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433166#comment-15433166 ] Hadoop QA commented on HADOOP-13498: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 3s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 24s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} javac {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 4s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825087/HADOOP-13498-HADOOP-12756.004.patch | | JIRA Issue | HADOOP-13498 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux ab828b3c23d5 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-12756 / 787750d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10346/testReport/ | | modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10346/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch, > HA
[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433134#comment-15433134 ] Zhe Zhang commented on HADOOP-12765: Thanks [~jojochuang]. Branch-2 patch LGTM. +1 pending Jenkins. The conflict is caused by HADOOP-10588. It's only in branch-2, not trunk. I'll file a JIRA to address. > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, > HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, > HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With sufficient number of SSL connections, > this issue could render NN HttpServer to become entirely irresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. 
The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13498: --- Attachment: HADOOP-13498-HADOOP-12756.004.patch > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch, > HADOOP-13498-HADOOP-12756.002.patch, HADOOP-13498-HADOOP-12756.003.patch, > HADOOP-13498-HADOOP-12756.004.patch > > > We should not only throw exception when exceed 10000 limit of multi-part > number, but should guarantee to upload any object no matter how big it is. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13498: --- Attachment: (was: HADOOP-13498-HADOOP-12756.004.patch) > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch, > HADOOP-13498-HADOOP-12756.002.patch, HADOOP-13498-HADOOP-12756.003.patch > > > We should not only throw exception when exceed 10000 limit of multi-part > number, but should guarantee to upload any object no matter how big it is. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
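The sizing rule implied by the HADOOP-13498 description — never exceed 10000 parts, whatever the object size — comes down to growing the part size with the object. A sketch of the approach (the rounding and helper names are illustrative, not taken from the patch):

```java
public class PartSizeSketch {
    static final long MAX_PARTS = 10000;

    // Smallest part size that keeps ceil(objectSize / partSize) within
    // MAX_PARTS, never going below the configured part size.
    static long effectivePartSize(long objectSize, long configuredPartSize) {
        long minPartSize = (objectSize + MAX_PARTS - 1) / MAX_PARTS; // ceiling division
        return Math.max(configuredPartSize, minPartSize);
    }

    static long partCount(long objectSize, long partSize) {
        return (objectSize + partSize - 1) / partSize; // ceiling division
    }

    public static void main(String[] args) {
        long fiveTb = 5L * 1024 * 1024 * 1024 * 1024;
        long partSize = effectivePartSize(fiveTb, 5L * 1024 * 1024);
        System.out.println(partCount(fiveTb, partSize)); // never more than 10000
    }
}
```

With a fixed 5 MB part size, anything past ~48.8 GB would blow the part cap; scaling the part size instead lets arbitrarily large objects upload.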
[jira] [Commented] (HADOOP-13396) Allow pluggable audit loggers in KMS
[ https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433121#comment-15433121 ] Hadoop QA commented on HADOOP-13396: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s{color} 
| {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} hadoop-common-project/hadoop-kms: The patch generated 0 new + 21 unchanged - 4 fixed = 21 total (was 25) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 7s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 27m 19s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825080/HADOOP-13396.09.patch | | JIRA Issue | HADOOP-13396 | | Optional Tests | asflicense mvnsite unit xml compile javac javadoc mvninstall findbugs checkstyle | | uname | Linux a9cd18342c16 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8aae8d6 | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10345/testReport/ | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10345/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Allow pluggable audit loggers in KMS > > > Key: HADOOP-13396 > URL: https://issues.apache.org/jira/browse/HADOOP-13396 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen >
[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhe Zhang updated HADOOP-12765: --- Fix Version/s: 3.0.0-alpha2 > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Fix For: 3.0.0-alpha2 > > Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, > HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, > HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With sufficient number of SSL connections, > this issue could render NN HttpServer to become entirely irresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260. 
[jira] [Updated] (HADOOP-13396) Allow pluggable audit loggers in KMS
[ https://issues.apache.org/jira/browse/HADOOP-13396?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13396: --- Attachment: HADOOP-13396.09.patch Thanks Andrew for the continued reviews. Patch 9 addressed all comments, and updated the logger loading/initialization to fail on any failures. IMO ideally we should {{ExitUtils#terminate}} it, but not sure how to unit test that though. Throwing a RTE is more consistent with current kms code... Also, for the grammar issue, I meant to say 'if you modify log format, downstream people will haunt you'. But updated with your suggestion. :) > Allow pluggable audit loggers in KMS > > > Key: HADOOP-13396 > URL: https://issues.apache.org/jira/browse/HADOOP-13396 > Project: Hadoop Common > Issue Type: New Feature > Components: kms >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13396.01.patch, HADOOP-13396.02.patch, > HADOOP-13396.03.patch, HADOOP-13396.04.patch, HADOOP-13396.05.patch, > HADOOP-13396.06.patch, HADOOP-13396.07.patch, HADOOP-13396.08.patch, > HADOOP-13396.09.patch > > > Currently, KMS audit log is using log4j, to write a text format log. > We should refactor this, so that people can easily add new format audit logs. > The current text format log should be the default, and all of its behavior > should remain compatible. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
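The fail-fast loading behavior discussed above — wrap any logger-initialization failure in a RuntimeException rather than silently dropping the logger — can be sketched as follows. The interface and class names are illustrative, not the actual KMS types.

```java
import java.util.ArrayList;
import java.util.List;

public class AuditLoaderSketch {
    public interface AuditLogger { void logOK(String user, String op); }

    // default text-format logger, kept for compatibility
    public static class TextAuditLogger implements AuditLogger {
        public void logOK(String user, String op) {
            System.out.println("OK[op=" + op + ", user=" + user + "]");
        }
    }

    // Instantiate loggers by class name; any failure is rethrown so a
    // misconfigured logger aborts startup instead of being skipped.
    static List<AuditLogger> load(List<String> classNames) {
        List<AuditLogger> loggers = new ArrayList<>();
        for (String name : classNames) {
            try {
                loggers.add((AuditLogger) Class.forName(name)
                    .getDeclaredConstructor().newInstance());
            } catch (Exception e) {
                throw new RuntimeException("Could not initialize audit logger " + name, e);
            }
        }
        return loggers;
    }

    public static void main(String[] args) {
        new TextAuditLogger().logOK("alice", "CREATE_KEY");
        try {
            load(List.of("com.example.MissingLogger")); // hypothetical class name
        } catch (RuntimeException e) {
            System.out.println(e.getMessage()); // startup fails loudly
        }
    }
}
```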
[jira] [Commented] (HADOOP-13497) fix wrong command in CredentialProviderAPI.md
[ https://issues.apache.org/jira/browse/HADOOP-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15433004#comment-15433004 ] Hudson commented on HADOOP-13497: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10330 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10330/]) HADOOP-13497. fix wrong command in CredentialProviderAPI.md. Contributed (iwasakims: rev 8aae8d6bf03ade0607547ed461dc99a336a7e9d4) * (edit) hadoop-common-project/hadoop-common/src/site/markdown/CredentialProviderAPI.md > fix wrong command in CredentialProviderAPI.md > - > > Key: HADOOP-13497 > URL: https://issues.apache.org/jira/browse/HADOOP-13497 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yuanbo Liu >Assignee: Yuanbo Liu >Priority: Trivial > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13497.001.patch > > > In CredentialProviderAPI.md line 122 > {quote} > Example: `hadoop credential create ssl.server.keystore.password > jceks://file/tmp/test.jceks` > {quote} > should be > {quote} > Example: `hadoop credential create ssl.server.keystore.password -provider > jceks://file/tmp/test.jceks` > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Resolved] (HADOOP-13497) fix wrong command in CredentialProviderAPI.md
[ https://issues.apache.org/jira/browse/HADOOP-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki resolved HADOOP-13497. --- Resolution: Fixed Fix Version/s: 3.0.0-alpha2 2.8.0 Committed. Thanks for the contribution, [~yuanbo]. > fix wrong command in CredentialProviderAPI.md > - > > Key: HADOOP-13497 > URL: https://issues.apache.org/jira/browse/HADOOP-13497 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yuanbo Liu >Assignee: Yuanbo Liu >Priority: Trivial > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HADOOP-13497.001.patch > > > In CredentialProviderAPI.md line 122 > {quote} > Example: `hadoop credential create ssl.server.keystore.password > jceks://file/tmp/test.jceks` > {quote} > should be > {quote} > Example: `hadoop credential create ssl.server.keystore.password -provider > jceks://file/tmp/test.jceks` > {quote} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-12765: - Target Version/s: 2.9.0 > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, > HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, > HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With sufficient number of SSL connections, > this issue could render NN HttpServer to become entirely irresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. > The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260. 
[jira] [Created] (HADOOP-13535) Add jetty6 acceptor startup issue workaround to branch-2
Wei-Chiu Chuang created HADOOP-13535: Summary: Add jetty6 acceptor startup issue workaround to branch-2 Key: HADOOP-13535 URL: https://issues.apache.org/jira/browse/HADOOP-13535 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.9.0 Reporter: Wei-Chiu Chuang After HADOOP-12765 is committed to branch-2, the handling of SSL connection by HttpServer2 may suffer the same Jetty bug described in HADOOP-10588. We should consider adding the same workaround for SSL connection. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13497) fix wrong command in CredentialProviderAPI.md
[ https://issues.apache.org/jira/browse/HADOOP-13497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432970#comment-15432970 ] Masatake Iwasaki commented on HADOOP-13497: --- +1 > fix wrong command in CredentialProviderAPI.md > - > > Key: HADOOP-13497 > URL: https://issues.apache.org/jira/browse/HADOOP-13497 > Project: Hadoop Common > Issue Type: Bug > Components: documentation >Reporter: Yuanbo Liu >Assignee: Yuanbo Liu >Priority: Trivial > Attachments: HADOOP-13497.001.patch > > > In CredentialProviderAPI.md line 122 > {quote} > Example: `hadoop credential create ssl.server.keystore.password > jceks://file/tmp/test.jceks` > {quote} > should be > {quote} > Example: `hadoop credential create ssl.server.keystore.password -provider > jceks://file/tmp/test.jceks` > {quote}
[jira] [Commented] (HADOOP-12765) HttpServer2 should switch to using the non-blocking SslSelectChannelConnector to prevent performance degradation when handling SSL connections
[ https://issues.apache.org/jira/browse/HADOOP-12765?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432967#comment-15432967 ] Wei-Chiu Chuang commented on HADOOP-12765: -- Ok. Thanks. Would you like to review my branch-2 patch? Let's commit this to branch-2 and then file a new jira to fix the potential issue. > HttpServer2 should switch to using the non-blocking SslSelectChannelConnector > to prevent performance degradation when handling SSL connections > -- > > Key: HADOOP-12765 > URL: https://issues.apache.org/jira/browse/HADOOP-12765 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2, 2.6.3 >Reporter: Min Shen >Assignee: Min Shen > Attachments: HADOOP-12765-branch-2.patch, HADOOP-12765.001.patch, > HADOOP-12765.001.patch, HADOOP-12765.002.patch, HADOOP-12765.003.patch, > HADOOP-12765.004.patch, HADOOP-12765.005.patch, blocking_1.png, > blocking_2.png, unblocking.png > > > The current implementation uses the blocking SslSocketConnector which takes > the default maxIdleTime as 200 seconds. We noticed in our cluster that when > users use a custom client that accesses the WebHDFS REST APIs through https, > it could block all the 250 handler threads in NN jetty server, causing severe > performance degradation for accessing WebHDFS and NN web UI. Attached > screenshots (blocking_1.png and blocking_2.png) illustrate that when using > SslSocketConnector, the jetty handler threads are not released until the 200 > seconds maxIdleTime has passed. With sufficient number of SSL connections, > this issue could render NN HttpServer to become entirely irresponsive. > We propose to use the non-blocking SslSelectChannelConnector as a fix. We > have deployed the attached patch within our cluster, and have seen > significant improvement. The attached screenshot (unblocking.png) further > illustrates the behavior of NN jetty server after switching to using > SslSelectChannelConnector. 
> The patch further disables SSLv3 protocol on server side to preserve the > spirit of HADOOP-11260.
[jira] [Commented] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432925#comment-15432925 ] Hadoop QA commented on HADOOP-13498: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 17s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 14s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 23s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s{color} | {color:green} HADOOP-12756 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | 
{color:green} javac {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 9s{color} | {color:orange} hadoop-tools/hadoop-aliyun: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s{color} | {color:green} hadoop-aliyun in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 12m 20s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825059/HADOOP-13498-HADOOP-12756.004.patch | | JIRA Issue | HADOOP-13498 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 653b4f1036af 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HADOOP-12756 / 787750d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10344/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aliyun.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10344/testReport/ | | modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10344/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > the number of multi-part upload part should not bigger than 1 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task
[jira] [Commented] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.
[ https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432923#comment-15432923 ] Hudson commented on HADOOP-13446: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10328 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10328/]) HADOOP-13446. Support running isolated unit tests separate from AWS (cnauroth: rev 6f9c346e577325ec2059d83d5636b5ff7fa6cdce) * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/fileContext/ITestS3AFileContextCreateMkdir.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/TestS3NContractSeek.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADeleteManyFiles.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlocksize.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionFastOutputStream.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ATemporaryCredentials.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractSeek.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3ADirectoryPerformance.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractMkdir.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractGetFileStatus.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractMkdir.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/TestS3NContractCreate.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AConfiguration.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/TestS3AInputStreamPerformance.java * (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractDelete.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryption.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/TestS3NContractRename.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/yarn/TestS3A.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AConfiguration.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AEncryptionAlgorithmPropagation.java * (edit) hadoop-tools/hadoop-aws/pom.xml * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFailureHandling.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ATemporaryCredentials.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3ABlockingThreadPool.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AEncryptionAlgorithmPropagation.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3AFailureHandling.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/fileContext/ITestS3AFileContextMainOperations.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/TestS3NContractMkdir.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRename.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractMkdir.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3native/TestInMemoryNativeS3FileSystemContract.java * (delete) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/fileContext/TestS3AFileContextMainOperations.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3n/ITestS3NContractRootDir.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestBlockingThreadPoolExecutorService.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/scale/ITestS3AInputStreamPerformance.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/TestS3AContractRename.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/TestS3ABlockingThreadPool.java * (delete) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/yarn/TestS3AMiniYarnCluster.java * (add) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contr
[jira] [Updated] (HADOOP-13446) Support running isolated unit tests separate from AWS integration tests.
[ https://issues.apache.org/jira/browse/HADOOP-13446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Chris Nauroth updated HADOOP-13446: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.9.0 Status: Resolved (was: Patch Available) I have committed this to trunk and branch-2. [~fabbri] and [~ste...@apache.org], thank you very much for the reviews and testing. It would be great to finish off HADOOP-13447 next. > Support running isolated unit tests separate from AWS integration tests. > > > Key: HADOOP-13446 > URL: https://issues.apache.org/jira/browse/HADOOP-13446 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Chris Nauroth >Assignee: Chris Nauroth > Fix For: 2.9.0 > > Attachments: HADOOP-13446-HADOOP-13345.001.patch, > HADOOP-13446-HADOOP-13345.002.patch, HADOOP-13446-HADOOP-13345.003.patch, > HADOOP-13446-branch-2.006.patch, HADOOP-13446.004.patch, > HADOOP-13446.005.patch, HADOOP-13446.006.patch > > > Currently, the hadoop-aws module only runs Surefire if AWS credentials have > been configured. This implies that all tests must run integrated with the > AWS back-end. It also means that no tests run as part of ASF pre-commit. > This issue proposes for the hadoop-aws module to support running isolated > unit tests without integrating with AWS. This will benefit S3Guard, because > we expect the need for isolated mock-based testing to simulate eventual > consistency behavior. It also benefits hadoop-aws in general by allowing > pre-commit to do something more valuable.
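The test split committed for HADOOP-13446 (the `Test*` to `ITest*` renames in the Hudson file list above) follows the common Maven convention, sketched below as an illustrative POM fragment rather than the exact hadoop-aws pom.xml: Surefire runs the isolated `Test*` unit tests on every build, including ASF pre-commit, while Failsafe runs the `ITest*` classes only in the integration-test phase, where AWS credentials are available.

```xml
<!-- Illustrative sketch: unit tests always run via Surefire... -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <includes>
      <include>**/Test*.java</include>
    </includes>
  </configuration>
</plugin>
<!-- ...while ITest* classes need AWS credentials and run via Failsafe. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <includes>
      <include>**/ITest*.java</include>
    </includes>
  </configuration>
</plugin>
```

Binding Failsafe to both `integration-test` and `verify` keeps test failures from aborting the build before cleanup, which is the usual reason for choosing Failsafe over Surefire for integration tests.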
[jira] [Updated] (HADOOP-13377) Phase I: Some improvement for incorporating Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13377: --- Summary: Phase I: Some improvement for incorporating Aliyun OSS file system implementation (was: Some improvement for incorporating Aliyun OSS file system implementation) > Phase I: Some improvement for incorporating Aliyun OSS file system > implementation > - > > Key: HADOOP-13377 > URL: https://issues.apache.org/jira/browse/HADOOP-13377 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0, HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > > This work is based on > [HADOOP-12756|https://issues.apache.org/jira/browse/HADOOP-12756]. > There are some stability problems to which we should pay attention, including > but not limited to: > 1. OSS will close long-lived connections (> 3h) and idle connections (> 1 minute), > while such connections are pretty common. > 2. The 'copy' operation is time-consuming, so we could use the existing > Job/Task executing logic, i.e. copy the temp result from the temp directory to the final > directory. > and some hack optimizations: > 1. use double buffering and multiple threads when reading OSS data > 2. data is split into chunks and uploaded in a ‘multipart’ way
[jira] [Assigned] (HADOOP-13377) Some improvement for incorporating Aliyun OSS file system implementation
[ https://issues.apache.org/jira/browse/HADOOP-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu reassigned HADOOP-13377: -- Assignee: Genmao Yu > Some improvement for incorporating Aliyun OSS file system implementation > > > Key: HADOOP-13377 > URL: https://issues.apache.org/jira/browse/HADOOP-13377 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: 2.8.0, HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > > This work is based on > [HADOOP-12756|https://issues.apache.org/jira/browse/HADOOP-12756]. > There are some stability problems to which we should pay attention, including > but not limited to: > 1. OSS will close long-lived connections (> 3h) and idle connections (> 1 minute), > while such connections are pretty common. > 2. The 'copy' operation is time-consuming, so we could use the existing > Job/Task executing logic, i.e. copy the temp result from the temp directory to the final > directory. > and some hack optimizations: > 1. use double buffering and multiple threads when reading OSS data > 2. data is split into chunks and uploaded in a ‘multipart’ way
[jira] [Updated] (HADOOP-13498) the number of multi-part upload part should not bigger than 10000
[ https://issues.apache.org/jira/browse/HADOOP-13498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Genmao Yu updated HADOOP-13498: --- Attachment: HADOOP-13498-HADOOP-12756.004.patch > the number of multi-part upload part should not bigger than 10000 > - > > Key: HADOOP-13498 > URL: https://issues.apache.org/jira/browse/HADOOP-13498 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs >Affects Versions: HADOOP-12756 >Reporter: Genmao Yu >Assignee: Genmao Yu > Fix For: HADOOP-12756 > > Attachments: HADOOP-13498-HADOOP-12756.001.patch, > HADOOP-13498-HADOOP-12756.002.patch, HADOOP-13498-HADOOP-12756.003.patch, > HADOOP-13498-HADOOP-12756.004.patch > > > We should not only throw an exception when the 10000-part limit of multipart > upload is exceeded, but should guarantee to upload any object no matter how big it is.
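The guarantee requested in HADOOP-13498 above, uploading any object regardless of size, is typically achieved by growing the part size once the configured size would push the part count past the 10000-part cap. A minimal sketch of that computation (a hypothetical helper, not the actual hadoop-aliyun code):

```java
public class PartSizing {
    /** Service-imposed cap on parts per multipart upload. */
    static final long MAX_PARTS = 10000;

    /**
     * Returns the part size to use: the configured size if the object fits
     * within MAX_PARTS at that size, otherwise the smallest part size that
     * keeps the part count at or below MAX_PARTS.
     */
    static long effectivePartSize(long objectSize, long configuredPartSize) {
        long parts = (objectSize + configuredPartSize - 1) / configuredPartSize;
        if (parts <= MAX_PARTS) {
            return configuredPartSize;
        }
        // Grow the part size (ceiling division) instead of failing the upload.
        return (objectSize + MAX_PARTS - 1) / MAX_PARTS;
    }

    public static void main(String[] args) {
        long mib = 1024L * 1024;
        // 100 MiB at 10 MiB parts -> 10 parts, configured size is kept.
        System.out.println(effectivePartSize(100 * mib, 10 * mib));
        // 1 TiB at 10 MiB parts would need 104858 parts -> part size grows.
        System.out.println(effectivePartSize(1024 * 1024 * mib, 10 * mib));
    }
}
```

With this scheme an oversized configured part size is never shrunk, and the part count can never exceed the cap, so the upload succeeds for any object size instead of throwing.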
[jira] [Updated] (HADOOP-13465) Design Server.Call to be extensible for unified call queue
[ https://issues.apache.org/jira/browse/HADOOP-13465?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daryn Sharp updated HADOOP-13465: - Issue Type: Improvement (was: Sub-task) Parent: (was: HADOOP-13425) > Design Server.Call to be extensible for unified call queue > -- > > Key: HADOOP-13465 > URL: https://issues.apache.org/jira/browse/HADOOP-13465 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Reporter: Daryn Sharp >Assignee: Daryn Sharp > Attachments: HADOOP-13465.patch > > > The RPC layer supports QoS but other protocols, ex. webhdfs, are completely > unconstrained. Generalizing {{Server.Call}} to be extensible with simple > changes to the handlers will enable unifying the call queue for multiple > protocols.
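The generalization proposed in HADOOP-13465 above can be pictured as a small base class with per-protocol subclasses feeding one shared queue. This is an illustrative stdlib-only sketch; the real Server.Call carries RPC-specific state, priority, and response plumbing not shown here:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class UnifiedQueueDemo {

    /** Minimal stand-in for an extensible Server.Call (illustrative only). */
    abstract static class Call {
        abstract String process();
    }

    /** RPC and WebHDFS calls become siblings sharing one queue entry type. */
    static class RpcCall extends Call {
        String process() { return "rpc"; }
    }

    static class WebHdfsCall extends Call {
        String process() { return "webhdfs"; }
    }

    /** Drains one shared call queue, regardless of originating protocol. */
    static String drain(BlockingQueue<Call> callQueue) throws InterruptedException {
        StringBuilder handled = new StringBuilder();
        while (!callQueue.isEmpty()) {
            if (handled.length() > 0) {
                handled.append(' ');
            }
            handled.append(callQueue.take().process());
        }
        return handled.toString();
    }

    static String demo() throws InterruptedException {
        BlockingQueue<Call> callQueue = new LinkedBlockingQueue<>();
        callQueue.put(new RpcCall());      // constrained by existing RPC QoS
        callQueue.put(new WebHdfsCall());  // previously unconstrained protocol
        return drain(callQueue);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

Once both protocols enter the same queue type, whatever QoS policy the queue implements (fair ordering, backoff, prioritization) constrains them uniformly, which is the point of the issue.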
[jira] [Updated] (HADOOP-13061) Refactor erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13061: Status: Patch Available (was: Open) > Refactor erasure coders > --- > > Key: HADOOP-13061 > URL: https://issues.apache.org/jira/browse/HADOOP-13061 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Kai Sasaki > Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, > HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, > HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, > HADOOP-13061.09.patch
[jira] [Updated] (HADOOP-13061) Refactor erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Sasaki updated HADOOP-13061: Status: Open (was: Patch Available) > Refactor erasure coders > --- > > Key: HADOOP-13061 > URL: https://issues.apache.org/jira/browse/HADOOP-13061 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Rui Li >Assignee: Kai Sasaki > Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, > HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch, > HADOOP-13061.06.patch, HADOOP-13061.07.patch, HADOOP-13061.08.patch, > HADOOP-13061.09.patch
[jira] [Commented] (HADOOP-13524) mvn eclipse:eclipse generates .gitignore'able files
[ https://issues.apache.org/jira/browse/HADOOP-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432647#comment-15432647 ] Hudson commented on HADOOP-13524: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10327 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10327/]) HADOOP-13524. mvn eclipse:eclipse generates .gitignore'able files. (jianhe: rev dd76238a3bafd58faa6f38f075505bef1012f150) * (edit) .gitignore > mvn eclipse:eclipse generates .gitignore'able files > --- > > Key: HADOOP-13524 > URL: https://issues.apache.org/jira/browse/HADOOP-13524 > Project: Hadoop Common > Issue Type: Bug >Reporter: Vinod Kumar Vavilapalli >Assignee: Vinod Kumar Vavilapalli > Fix For: 2.8.0 > > Attachments: HADOOP-13524.txt > > > {code} > $ git status > On branch trunk > Your branch is up-to-date with 'origin/trunk'. > Untracked files: > (use "git add ..." to include in what will be committed) > hadoop-build-tools/.externalToolBuilders/ > hadoop-build-tools/maven-eclipse.xml > {code}
[jira] [Updated] (HADOOP-13524) mvn eclipse:eclipse generates .gitignore'able files
[ https://issues.apache.org/jira/browse/HADOOP-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jian He updated HADOOP-13524: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 2.8.0 Target Version/s: 2.8.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-2, branch-2.8. thanks Vinod ! > mvn eclipse:eclipse generates .gitignore'able files > --- > > Key: HADOOP-13524 > URL: https://issues.apache.org/jira/browse/HADOOP-13524 > Project: Hadoop Common > Issue Type: Bug >Reporter: Vinod Kumar Vavilapalli >Assignee: Vinod Kumar Vavilapalli > Fix For: 2.8.0 > > Attachments: HADOOP-13524.txt > > > {code} > $ git status > On branch trunk > Your branch is up-to-date with 'origin/trunk'. > Untracked files: > (use "git add ..." to include in what will be committed) > hadoop-build-tools/.externalToolBuilders/ > hadoop-build-tools/maven-eclipse.xml > {code}
[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab
[ https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432609#comment-15432609 ] Hadoop QA commented on HADOOP-13433: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s{color} 
| {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 27 new + 96 unchanged - 2 fixed = 123 total (was 98) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 14s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 38m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12825021/HADOOP-13433.patch | | JIRA Issue | HADOOP-13433 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml findbugs checkstyle | | uname | Linux 49d5518b9c9f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 0d5997d | | Default Java | 1.8.0_101 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/10343/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10343/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10343/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Race in UGI.reloginFromKeytab > - > > Key: HADOOP-13433 > URL: https://issues.apache.org/jira/browse/HADOOP-13433 > Proje
[jira] [Commented] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initiate code
[ https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432575#comment-15432575 ]

Hadoop QA commented on HADOOP-13534:

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 81 unchanged - 3 fixed = 81 total (was 84) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 25s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 39m 12s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12787431/HDFS-9785.001.patch |
| JIRA Issue | HADOOP-13534 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 9538a9368d3c 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0d5997d |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/10342/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/10342/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> Remove unused TrashPolicy#getInstance and initiate code
> -------------------------------------------------------
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Zhe Zhang
> Assignee: Yiqun Lin
> Priority: Minor
> Attachments: HDFS-9785.001.patch
>
> A follow-on from HDFS-8831: now the {{getInstance}} and {{initiate
[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab
[ https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HADOOP-13433:
    Status: Patch Available  (was: Open)

> Race in UGI.reloginFromKeytab
> -----------------------------
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Reporter: Duo Zhang
> Attachments: HADOOP-13433.patch
>
> This is a problem that has troubled us for several years. For our HBase
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: The ticket isn't for us (35) - BAD TGS SERVER NAME)]
>     at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
>     at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
>     at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
>     at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
>     at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
>     at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:396)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
>     at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>     at java.lang.reflect.Method.invoke(Method.java:597)
>     at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
>     at org.apache.hadoop.hbase.security.User.call(User.java:607)
>     at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
>     at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
>     at org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
>     at org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
>     at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
>     at org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
>     at $Proxy24.replicateLogEntries(Unknown Source)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
>     at org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The ticket isn't for us (35) - BAD TGS SERVER NAME)
>     at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
>     at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
>     at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
>     at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
>     ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
>     at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:64)
>     at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
>     at sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
>     at sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
>     at sun.security.krb5.Credentials.acquireServiceCreds(Credentials.java:557)
>     at sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:594)
>     ... 26 more
> Caused by: KrbException: Identifier doesn't match expected value (906)
>     at sun.security.krb5.internal.KDCRep.init(KDCRep.java:133)
>     at sun.security.krb5.internal.TGSRep.init(TGSRep.java:58)
>     at sun.security.krb5.internal.TGSRep.<init>(TGSRep.java:53)
>     at sun.security.krb5.KrbTgsRep.<init>(KrbTgsRep.java:46)
>     ... 31 more
> {noformat}
> It rarely happens, but if it happens, the regionserver will be stuck and can never recover.
> Recently we added a log after a successful re-login which prints the private credentials, and finally caught the direct reason. Af
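The failure mode the trace above describes is a visibility window during relogin: if logout empties the credential set before login repopulates it, a concurrent connection attempt can pick up the wrong ticket. The contrast between that racy two-step and an atomic swap can be shown with a minimal, hypothetical sketch (a String stands in for the TGT; this is not Hadoop's actual UserGroupInformation code):

```java
import java.util.concurrent.atomic.AtomicReference;

public class ReloginRaceSketch {
    // Shared "credential" slot; null models the window after logout()
    // and before login() has completed.
    static final AtomicReference<String> tgt = new AtomicReference<>("TGT-old");

    // Racy pattern: logout then login. A concurrent reader between the
    // two calls finds no TGT and may fall back to some other ticket.
    static void reloginUnsafe() {
        tgt.set(null);        // logout(): credentials briefly empty
        // ...a concurrent connection attempt here sees null...
        tgt.set("TGT-new");   // login(): window closes only now
    }

    // Safer pattern: acquire the new TGT off to the side, then publish
    // it in one atomic step, so readers never observe an empty slot.
    static void reloginAtomic() {
        String fresh = "TGT-new"; // stand-in for a freshly acquired TGT
        tgt.set(fresh);           // single atomic publish, no empty window
    }

    public static void main(String[] args) {
        reloginAtomic();
        System.out.println(tgt.get()); // prints TGT-new
    }
}
```

In the real code the credential set lives inside a shared Subject rather than an AtomicReference, which is exactly why closing this window is harder than the sketch suggests.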
[jira] [Updated] (HADOOP-13433) Race in UGI.reloginFromKeytab
[ https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Duo Zhang updated HADOOP-13433:
    Attachment: HADOOP-13433.patch

Let's fix the bug first. I found that the minikdc on trunk could reject a TGS request which uses a service ticket as the TGT.

The difficulty of changing the relogin steps is that the Subject of a UserGroupInformation can be passed in from outside, and UserGroupInformation is declared LimitedPrivate, so it is not safe to switch the Subject after relogin. If we want to do this, we need to discuss it with the projects named in the LimitedPrivate annotations. I can file another issue to do this. Thanks.
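Duo Zhang's point that the Subject may be passed in from outside, so it cannot simply be swapped on relogin, comes down to reference identity: external holders keep a reference to the original Subject, so only in-place credential refreshes are visible to them. A small standalone sketch (illustrative only, not UserGroupInformation's real code):

```java
import javax.security.auth.Subject;

public class SubjectSwapSketch {
    public static void main(String[] args) {
        Subject original = new Subject();
        // An external caller (e.g. the code that constructed the UGI from
        // this Subject) keeps its own reference to it:
        Subject heldByCaller = original;

        // If relogin *replaced* the Subject with a new one...
        Subject replacement = new Subject();
        replacement.getPrivateCredentials().add("fresh-ticket");

        // ...the caller's reference would still point at the stale Subject:
        System.out.println(heldByCaller.getPrivateCredentials().isEmpty()); // true

        // Refreshing the credentials *in place* is visible to every holder:
        original.getPrivateCredentials().add("fresh-ticket");
        System.out.println(heldByCaller.getPrivateCredentials().isEmpty()); // false
    }
}
```

This is why changing the relogin sequence would need buy-in from the downstream projects listed in the LimitedPrivate annotation: they may hold the Subject directly.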
[jira] [Commented] (HADOOP-13534) Remove unused TrashPolicy#getInstance and initiate code
[ https://issues.apache.org/jira/browse/HADOOP-13534?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15432515#comment-15432515 ]

Akira Ajisaka commented on HADOOP-13534:

Moved to Hadoop Common because the code change is in hadoop-common. Hi [~linyiqun], would you file another JIRA to deprecate the methods before removing them?

> Remove unused TrashPolicy#getInstance and initiate code
> -------------------------------------------------------
>
> Key: HADOOP-13534
> URL: https://issues.apache.org/jira/browse/HADOOP-13534
> Project: Hadoop Common
> Issue Type: Improvement
> Reporter: Zhe Zhang
> Assignee: Yiqun Lin
> Priority: Minor
> Attachments: HDFS-9785.001.patch
>
> A follow-on from HDFS-8831: now the {{getInstance}} and {{initiate}} APIs
> with Path are not used anymore.

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
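The deprecate-before-remove step Akira asks for usually means keeping the old overload for one release, marking it @Deprecated, and delegating to the replacement. A hedged sketch of that pattern (the nested Conf/Fs/Path classes are simplified stand-ins, not the real Configuration/FileSystem/Path types, and the bodies are illustrative, not TrashPolicy's actual code):

```java
public class DeprecationSketch {

    static class Conf {}   // stand-in for Configuration
    static class Fs {}     // stand-in for FileSystem
    static class Path {}   // stand-in for Path

    static class TrashPolicy {
        /** Replacement API: the policy derives per-user trash paths itself. */
        static TrashPolicy getInstance(Conf conf, Fs fs) {
            return new TrashPolicy();
        }

        /**
         * @deprecated Use {@link #getInstance(Conf, Fs)} instead; the
         * {@code home} argument is no longer used. Kept for one release
         * cycle so downstream callers see a compile-time warning before
         * the overload is removed.
         */
        @Deprecated
        static TrashPolicy getInstance(Conf conf, Fs fs, Path home) {
            return getInstance(conf, fs); // delegate; home is ignored
        }
    }

    public static void main(String[] args) {
        // New callers use the two-argument form; old callers still compile,
        // but with a deprecation warning.
        TrashPolicy p = TrashPolicy.getInstance(new Conf(), new Fs());
        System.out.println(p != null); // prints true
    }
}
```

Removing the overload only after a deprecation release is what gives LimitedPrivate consumers time to migrate.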