[jira] [Commented] (HADOOP-13160) Suppress checkstyle JavadocPackage check for test source
[ https://issues.apache.org/jira/browse/HADOOP-13160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299556#comment-15299556 ] John Zhuge commented on HADOOP-13160: - [~ste...@apache.org] Could you please commit? > Suppress checkstyle JavadocPackage check for test source > > > Key: HADOOP-13160 > URL: https://issues.apache.org/jira/browse/HADOOP-13160 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.7.2 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13160.001.patch, HADOOP-13160.002.patch, > HADOOP-13160.003.patch > > > Suppress "Missing package-info.java" checkstyle error for test source files. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
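As an illustration of the suppression being proposed here (the rule name is checkstyle's real JavadocPackage check, but the path pattern and placement are hypothetical; the committed patch may differ), a checkstyle suppressions entry could look like:

```xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
    "-//Puppy Crawl//DTD Suppressions 1.1//EN"
    "http://www.puppycrawl.com/dtds/suppressions_1_1.dtd">
<suppressions>
  <!-- Illustrative rule: skip the JavadocPackage (missing package-info.java)
       check for any file under a src/test directory. -->
  <suppress checks="JavadocPackage" files="[\\/]src[\\/]test[\\/]"/>
</suppressions>
```

The `files` attribute is a regex matched against the full file path, so one entry covers test sources across all modules.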
[jira] [Updated] (HADOOP-12579) Deprecate and remove WriteableRPCEngine
[ https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-12579: --- Attachment: HADOOP-12579-v11.patch Fixed some checkstyle warnings by removing unused imports. > Deprecate and remove WriteableRPCEngine > --- > > Key: HADOOP-12579 > URL: https://issues.apache.org/jira/browse/HADOOP-12579 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Haohui Mai >Assignee: Kai Zheng > Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, > HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, > HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, > HADOOP-12579-v8.patch, HADOOP-12579-v9.patch > > > The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC > requests. Without proper checks, it has been shown that it can lead to security > vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, > HADOOP-12577). > The implementation has now migrated from {{WriteableRPCEngine}} to > {{ProtobufRPCEngine}}. This jira proposes to deprecate > {{WriteableRPCEngine}} in branch-2 and to remove it in trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
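To illustrate why the issue description flags Java serialization as risky (this is a self-contained sketch, not Hadoop code): native deserialization invokes the `readObject()` hook of whatever class arrives on the wire, so an RPC server that deserializes untrusted payloads hands the sender a code-execution entry point, as in COLLECTIONS-580 and HADOOP-12577.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-alone demo: deserializing a byte stream runs the payload class's
// readObject() hook, the mechanism behind Java-deserialization RCE gadgets.
public class DeserializationDemo {

    static class Payload implements Serializable {
        private static final long serialVersionUID = 1L;
        static boolean readObjectRan = false;

        // Invoked automatically by ObjectInputStream during deserialization;
        // in a real gadget chain, arbitrary attacker-chosen logic runs here.
        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            readObjectRan = true;
        }
    }

    // Serialize a Payload and deserialize it again; returns true if the
    // receiver-side readObject() hook was executed.
    public static boolean roundTrip() {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
                oos.writeObject(new Payload());
            }
            try (ObjectInputStream ois = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()))) {
                ois.readObject();
            }
            return Payload.readObjectRan;
        } catch (Exception e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println("readObject hook ran: " + roundTrip());
    }
}
```

Protobuf-based RPC avoids this class of bug because decoding a protobuf message never dispatches to receiver-side methods chosen by the sender.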
[jira] [Assigned] (HADOOP-12579) Deprecate and remove WriteableRPCEngine
[ https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng reassigned HADOOP-12579: -- Assignee: Kai Zheng > Deprecate and remove WriteableRPCEngine > --- > > Key: HADOOP-12579 > URL: https://issues.apache.org/jira/browse/HADOOP-12579 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Haohui Mai >Assignee: Kai Zheng > Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, > HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, HADOOP-12579-v5.patch, > HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, HADOOP-12579-v8.patch, > HADOOP-12579-v9.patch > > > The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC > requests. Without proper checks, it has been shown that it can lead to security > vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, > HADOOP-12577). > The implementation has now migrated from {{WriteableRPCEngine}} to > {{ProtobufRPCEngine}}. This jira proposes to deprecate > {{WriteableRPCEngine}} in branch-2 and to remove it in trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299522#comment-15299522 ] Xiao Chen commented on HADOOP-12893: Thanks [~andrew.wang] for reviewing. bq. I thought we had all the source distribution items covered already, so these new additions would only apply to the binary distribution. Maybe I'm reading the L&N wrong. I read [this comment from you above|https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15283260&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15283260] and thought we wanted to list dependencies and say they are in the bundle, in the source code, or both. Should we just list anything that has 'bundled?'==Y, and skip the others? That would be simple; I just want to make sure before making the change. bq. There's also a few copies of the GPL and LGPL still in LICENSES. This content was supposed to be pulled from the Licenses tab on the spreadsheet, apparently not? The script currently only groups from the dependencies tab and lists the license name. After that I went to the links, copied each license, and wrapped it at 80 characters. I may have forgotten to remove the GPL part from CDDL+GPL... :( One more automation we could do is paste the text into the {{License text}} column and generate from there. So, LGPL will be gone once we get rid of jdiff. GPL should be gone since they're all CDDL+GPL w/ CPE, so we should be good using CDDL. 
> Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.01.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
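The "list anything that has 'bundled?'==Y" rule discussed above can be sketched as follows (the real script, spreadsheet columns, and dependency names here are assumptions, not the actual tooling from this thread): filter the dependency rows by the bundled flag, then group by license for the L&N output.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// Hypothetical sketch of the proposed grouping rule: keep only dependencies
// whose "bundled?" column is Y, then group their names by license.
public class BundledFilterSketch {

    // Each row models a spreadsheet line: {dependency name, license, bundled flag}.
    public static Map<String, List<String>> groupBundledByLicense(
            List<String[]> rows) {
        return rows.stream()
            .filter(r -> "Y".equalsIgnoreCase(r[2]))   // skip non-bundled deps
            .collect(Collectors.groupingBy(
                r -> r[1],                              // group by license name
                TreeMap::new,                           // stable, sorted output
                Collectors.mapping(r -> r[0], Collectors.toList())));
    }
}
```

With this rule, anything not bundled simply never appears in the generated LICENSE section, which matches the "skip the others" simplification asked about above.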
[jira] [Commented] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299521#comment-15299521 ] Hadoop QA commented on HADOOP-11820: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 00s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 00s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:blue}0{color} | {color:blue} machack {color} | {color:blue} 0m 01s {color} | {color:blue} Applied YARN-5121 so that OS X works {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 10s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 27s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 21s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 19s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 2m 19s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 48m 56s {color} | {color:red} hadoop-yarn in the patch failed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 42s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806058/YARN-5132-v1.patch | | JIRA Issue | HADOOP-11820 | | Optional Tests | compile javac mvninstall unit | | uname | Darwin Gavins-Mac-mini.local 13.2.0 Darwin Kernel Version 13.2.0: Thu Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64 | | Build tool | maven | | Personality | /Users/jenkins/jenkins-home/workspace/Precommit-HADOOP-OSX/patchprocess/apache-yetus-bde9590/precommit/personality/hadoop.sh | | git revision | trunk / 28bd63e | | Default Java | 1.8.0_74 | | unit | https://builds.apache.org/job/Precommit-HADOOP-OSX/21/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn.txt | | Test Results | https://builds.apache.org/job/Precommit-HADOOP-OSX/21/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn U: hadoop-yarn-project/hadoop-yarn | | Console output | https://builds.apache.org/job/Precommit-HADOOP-OSX/21/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > aw jira testing, ignore > --- > > Key: HADOOP-11820 > URL: https://issues.apache.org/jira/browse/HADOOP-11820 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer > Attachments: YARN-5132-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine
[ https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299516#comment-15299516 ] Hadoop QA commented on HADOOP-12579: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 12 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 35s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 12s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 4s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 37s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 9s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 1m 40s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 6m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 38s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 31s {color} | {color:red} root: The patch generated 7 new + 846 unchanged - 71 fixed = 853 total (was 917) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 14s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 27s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 48s {color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 47s {color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 56s {color} | {color:green} hadoop-mapreduce-client-hs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 128m 51s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | | | hadoop.hdfs.TestCrcCorruption | | | hadoop.hdfs.server.datanode.TestDataNodeLifeline | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806052/HADOOP-12579-v10.patch | | JIRA Issue | HADOOP-12579 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle cc | | uname | Linux 3de0d642cef2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ae353ea | | Default Java | 1.8.0
[jira] [Commented] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299480#comment-15299480 ] Hadoop QA commented on HADOOP-13197: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 17s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 27s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 
26s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s {color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 38 unchanged - 3 fixed = 38 total (was 41) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 30s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 52s {color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 36m 33s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806063/HADOOP-13197.01.patch | | JIRA Issue | HADOOP-13197 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 39f8885effd6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 28bd63e | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9578/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9578/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add non-decayed call metrics for DecayRpcScheduler > -- > > Key: HADOOP-13197 > URL: https://issues.apache.org/jira/browse/HADOOP-13197 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc, metrics >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch > > > DecayRpcScheduler currently exposes decayed
[jira] [Updated] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-13197: Attachment: HADOOP-13197.01.patch Attached patch v01 to address the checkstyle issues. > Add non-decayed call metrics for DecayRpcScheduler > -- > > Key: HADOOP-13197 > URL: https://issues.apache.org/jira/browse/HADOOP-13197 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc, metrics >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HADOOP-13197.00.patch, HADOOP-13197.01.patch > > > DecayRpcScheduler currently exposes the decayed call count over time. It will > be useful to expose the non-decayed raw count for monitoring applications. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
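The idea behind the metric addition can be sketched in isolation (class and method names here are illustrative, not the actual DecayRpcScheduler code): keep a raw, monotonically increasing counter next to the decayed one, so monitoring sees absolute call volume while the scheduler keeps using the decayed value for prioritization.

```java
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch of tracking both counters side by side.
public class CallCounterSketch {
    private final AtomicLong rawCallCount = new AtomicLong();     // never decayed
    private final AtomicLong decayedCallCount = new AtomicLong(); // shrunk each sweep

    // Record one incoming RPC call: both counters advance together.
    public void onCall() {
        rawCallCount.incrementAndGet();
        decayedCallCount.incrementAndGet();
    }

    // Periodic decay sweep: halve the decayed count (a fixed 0.5 factor here,
    // for simplicity), leaving the raw count untouched.
    public void decay() {
        decayedCallCount.updateAndGet(c -> c / 2);
    }

    public long getRawCallCount() { return rawCallCount.get(); }
    public long getDecayedCallCount() { return decayedCallCount.get(); }
}
```

After four calls and one decay sweep, the decayed count reflects recent load (2) while the raw count still reports the true total (4), which is exactly the distinction the issue asks to expose.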
[jira] [Updated] (HADOOP-13137) TraceAdmin should support Kerberized cluster
[ https://issues.apache.org/jira/browse/HADOOP-13137?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-13137: - Attachment: HADOOP-13137.004.patch v04: fixed checkstyle and javac warnings. > TraceAdmin should support Kerberized cluster > > > Key: HADOOP-13137 > URL: https://issues.apache.org/jira/browse/HADOOP-13137 > Project: Hadoop Common > Issue Type: Bug > Components: tracing >Affects Versions: 2.6.0, 3.0.0-alpha1 > Environment: CDH5.5.1 cluster with Kerberos >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Labels: Kerberos > Attachments: HADOOP-13137.001.patch, HADOOP-13137.002.patch, > HADOOP-13137.003.patch, HADOOP-13137.004.patch > > > When I run {{hadoop trace}} command for a Kerberized NameNode, it failed with > the following error: > [hdfs@weichiu-encryption-1 root]$ hadoop trace -list -host > weichiu-encryption-1.vpc.cloudera.com:802216/05/12 00:02:13 WARN ipc.Client: > Exception encountered while connecting to the server : > java.lang.IllegalArgumentException: Failed to specify server's Kerberos > principal name > 16/05/12 00:02:13 WARN security.UserGroupInformation: > PriviledgedActionException as:h...@vpc.cloudera.com (auth:KERBEROS) > cause:java.io.IOException: java.lang.IllegalArgumentException: Failed to > specify server's Kerberos principal name > Exception in thread "main" java.io.IOException: Failed on local exception: > java.io.IOException: java.lang.IllegalArgumentException: Failed to specify > server's Kerberos principal name; Host Details : local host is: > "weichiu-encryption-1.vpc.cloudera.com/172.26.8.185"; destination host is: > "weichiu-encryption-1.vpc.cloudera.com":8022; > at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:772) > at org.apache.hadoop.ipc.Client.call(Client.java:1470) > at org.apache.hadoop.ipc.Client.call(Client.java:1403) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230) > at 
com.sun.proxy.$Proxy11.listSpanReceivers(Unknown Source) > at > org.apache.hadoop.tracing.TraceAdminProtocolTranslatorPB.listSpanReceivers(TraceAdminProtocolTranslatorPB.java:58) > at > org.apache.hadoop.tracing.TraceAdmin.listSpanReceivers(TraceAdmin.java:68) > at org.apache.hadoop.tracing.TraceAdmin.run(TraceAdmin.java:177) > at org.apache.hadoop.tracing.TraceAdmin.main(TraceAdmin.java:195) > Caused by: java.io.IOException: java.lang.IllegalArgumentException: Failed to > specify server's Kerberos principal name > at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:682) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) > at > org.apache.hadoop.ipc.Client$Connection.handleSaslConnectionFailure(Client.java:645) > at > org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:733) > at org.apache.hadoop.ipc.Client$Connection.access$2800(Client.java:370) > at org.apache.hadoop.ipc.Client.getConnection(Client.java:1519) > at org.apache.hadoop.ipc.Client.call(Client.java:1442) > ... 
7 more > Caused by: java.lang.IllegalArgumentException: Failed to specify server's > Kerberos principal name > at > org.apache.hadoop.security.SaslRpcClient.getServerPrincipal(SaslRpcClient.java:322) > at > org.apache.hadoop.security.SaslRpcClient.createSaslClient(SaslRpcClient.java:231) > at > org.apache.hadoop.security.SaslRpcClient.selectSaslClient(SaslRpcClient.java:159) > at > org.apache.hadoop.security.SaslRpcClient.saslConnect(SaslRpcClient.java:396) > at > org.apache.hadoop.ipc.Client$Connection.setupSaslConnection(Client.java:555) > at org.apache.hadoop.ipc.Client$Connection.access$1800(Client.java:370) > at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:725) > at org.apache.hadoop.ipc.Client$Connection$2.run(Client.java:721) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:415) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1671) > at > org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:720) > ... 10 more > It is failing because {{TraceAdmin}} does not set up the property > {{CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY}}. > Fixing it may require some restructuring, as the NameNode principal > {{dfs.namenode.kerberos.principal}} is an HDFS property, but TraceAdmin is in > hadoop-common. Or, specify it with a new option {{-principal}}. Any > suggestions?
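The {{-principal}} direction floated above could be sketched as follows. This is a hypothetical, stand-alone illustration, not the committed patch: the class, method, and key string are stand-ins (the key mirrors what {{CommonConfigurationKeys.HADOOP_SECURITY_SERVICE_USER_NAME_KEY}} would supply), and a plain map stands in for Hadoop's {{Configuration}}.

```java
import java.util.Map;

// Hypothetical sketch: scan the CLI args for "-principal <name>" and copy the
// value into the client configuration under the service-principal key before
// the secure RPC connection is set up, so SaslRpcClient can resolve the
// server's Kerberos principal.
public class PrincipalOptionSketch {

    // Illustrative stand-in for the real constant in hadoop-common.
    public static final String SERVICE_PRINCIPAL_KEY =
        "hadoop.security.service.user.name.key";

    public static Map<String, String> applyPrincipal(String[] args,
                                                     Map<String, String> conf) {
        for (int i = 0; i < args.length - 1; i++) {
            if ("-principal".equals(args[i])) {
                conf.put(SERVICE_PRINCIPAL_KEY, args[i + 1]);
                break;
            }
        }
        return conf;
    }
}
```

This keeps TraceAdmin free of any dependency on the HDFS-side {{dfs.namenode.kerberos.principal}} key, at the cost of requiring the operator to pass the principal explicitly.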
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: YARN-5132-v1.patch > aw jira testing, ignore > --- > > Key: HADOOP-11820 > URL: https://issues.apache.org/jira/browse/HADOOP-11820 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer > Attachments: YARN-5132-v1.patch > > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Attachment: (was: YARN-5121.00.patch) > aw jira testing, ignore > --- > > Key: HADOOP-11820 > URL: https://issues.apache.org/jira/browse/HADOOP-11820 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Comment: was deleted (was: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 32s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 52s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 45s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 59s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | 
{color:green} 6m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 34s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 99m 55s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 25s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 166m 17s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock | | | hadoop.hdfs.TestDFSUpgradeFromImage | | | hadoop.hdfs.shortcircuit.TestShortCircuitCache | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805466/YARN-5121.00.patch | | JIRA Issue | HADOOP-11820 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml cc | | uname | Linux c99d95629d82 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 500e946 | | Default Java | 1.8.0_91 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9548/artifact/patchprocess/patch-unit-root.txt | | unit test logs | https://builds.apache.org/job/PreCommit-HADOOP-Build/9548/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9548/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9548/console | | Powered by | Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. ) > aw jira testing, ignore > --- > > Key: HADOOP-11820 >
[jira] [Issue Comment Deleted] (HADOOP-11820) aw jira testing, ignore
[ https://issues.apache.org/jira/browse/HADOOP-11820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Allen Wittenauer updated HADOOP-11820: -- Comment: was deleted (was: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 00s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 00s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 57s {color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 3m 57s {color} | {color:red} root in trunk failed. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 58s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 49s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 7m 49s {color} | {color:red} root generated 12 new + 16 unchanged - 10 fixed = 28 total (was 26) {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 7m 49s {color} | {color:red} root generated 526 new + 172 unchanged - 0 fixed = 698 total (was 172) {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 19s {color} | {color:red} root in the patch failed. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 21s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.TestSymlinkLocalFSFileContext | | | hadoop.fs.TestSymlinkLocalFSFileSystem | | | hadoop.net.unix.TestDomainSocket | | | hadoop.security.ssl.TestReloadingX509TrustManager | | | hadoop.security.TestShellBasedIdMapping | \\ \\ || Subsystem || Report/Notes || | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805466/YARN-5121.00.patch | | JIRA Issue | HADOOP-11820 | | Optional Tests | compile javac mvninstall unit cc | | uname | Darwin Gavins-Mac-mini.local 13.2.0 Darwin Kernel Version 13.2.0: Thu Apr 17 23:03:13 PDT 2014; root:xnu-2422.100.13~1/RELEASE_X86_64 x86_64 | | Build tool | maven | | Personality | /Users/jenkins/jenkins-home/workspace/Precommit-HADOOP-OSX/patchprocess/apache-yetus-21ed107/precommit/personality/hadoop.sh | | git revision | trunk / 500e946 | | Default Java | 1.8.0_74 | | compile | https://builds.apache.org/job/Precommit-HADOOP-OSX/14/artifact/patchprocess/branch-compile-root.txt | | cc | https://builds.apache.org/job/Precommit-HADOOP-OSX/14/artifact/patchprocess/diff-compile-cc-root.txt | | javac | https://builds.apache.org/job/Precommit-HADOOP-OSX/14/artifact/patchprocess/diff-compile-javac-root.txt | | unit | https://builds.apache.org/job/Precommit-HADOOP-OSX/14/artifact/patchprocess/patch-unit-root.txt | | unit test logs | https://builds.apache.org/job/Precommit-HADOOP-OSX/14/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/Precommit-HADOOP-OSX/14/testReport/ | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager . U: . | | Console output | https://builds.apache.org/job/Precommit-HADOOP-OSX/14/console | | Powered by | Apache Yetus 0.3.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
) > aw jira testing, ignore > --- > > Key: HADOOP-11820 > URL: https://issues.apache.org/jira/browse/HADOOP-11820 > Project: Hadoop Common > Issue Type: Task >Affects Versions: 3.0.0-alpha1 >Reporter: Allen Wittenauer > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13202) Implementation of getNBytes in org.apache.hadoop.util.bloom.BloomFilter might be changed
[ https://issues.apache.org/jira/browse/HADOOP-13202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] zhengbing li updated HADOOP-13202: -- Description: Current implementation: return (vectorSize + 7) / 8; when vectorSize is 2147483647(the max value of Int), error :"java.lang.NegativeArraySizeException" will report the implementation might be changed return (int)(((long)vectorSize + 7) / 8); was: Current implementation: return (vectorSize + 7) / 8; when vectorSize is 2147483647(the max value of Int), error :"java.lang.NegativeArraySizeException" will report the implementation might be changed return (int)((long)vectorSize + 7) / 8; > Implementation of getNBytes in org.apache.hadoop.util.bloom.BloomFilter might > be changed > > > Key: HADOOP-13202 > URL: https://issues.apache.org/jira/browse/HADOOP-13202 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.2 >Reporter: zhengbing li > Original Estimate: 1h > Remaining Estimate: 1h > > Current implementation: > return (vectorSize + 7) / 8; > when vectorSize is 2147483647(the max value of Int), error > :"java.lang.NegativeArraySizeException" will report > the implementation might be changed > return (int)(((long)vectorSize + 7) / 8); -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
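The overflow described above is easy to reproduce in isolation. The sketch below is a standalone illustration (class and method names are mine, not the actual Hadoop source): the int addition {{vectorSize + 7}} wraps around for {{Integer.MAX_VALUE}}, producing a negative byte count, while widening to long before the addition gives the correct result.

```java
// Standalone demonstration of the BloomFilter#getNBytes overflow and the
// proposed fix. Class/method names are illustrative, not the Hadoop source.
public class GetNBytesOverflow {

    // Buggy version: (vectorSize + 7) overflows int when
    // vectorSize == Integer.MAX_VALUE, yielding a negative size.
    static int getNBytesBuggy(int vectorSize) {
        return (vectorSize + 7) / 8;
    }

    // Fixed version: widen to long before adding, divide, then narrow.
    static int getNBytesFixed(int vectorSize) {
        return (int) (((long) vectorSize + 7) / 8);
    }

    public static void main(String[] args) {
        int v = Integer.MAX_VALUE; // 2147483647
        System.out.println(getNBytesBuggy(v)); // -268435455 (negative array size)
        System.out.println(getNBytesFixed(v)); // 268435456
    }
}
```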
[jira] [Commented] (HADOOP-13191) FileSystem#listStatus should not return null
[ https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299415#comment-15299415 ] Hadoop QA commented on HADOOP-13191: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 3 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 29s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 56s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 0s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 24s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 55s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 18s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} 
mvninstall {color} | {color:green} 1m 38s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 36s {color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 28s {color} | {color:green} hadoop-distcp in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 60m 38s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806046/HADOOP-13191.001.patch | | JIRA Issue | HADOOP-13191 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux eb9c16ede999 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / ae353ea | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9574/testReport/ | | modules | C: hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs-client hadoop-tools/hadoop-distcp U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9574/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated.
[jira] [Updated] (HADOOP-12579) Deprecate and remove WriteableRPCEngine
[ https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-12579: --- Attachment: HADOOP-12579-v10.patch Rebased the patch. > Deprecate and remove WriteableRPCEngine > --- > > Key: HADOOP-12579 > URL: https://issues.apache.org/jira/browse/HADOOP-12579 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Haohui Mai > Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, > HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, HADOOP-12579-v5.patch, > HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, HADOOP-12579-v8.patch, > HADOOP-12579-v9.patch > > > The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC > requests. Without proper checks, it has been shown that it can lead to security > vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, > HADOOP-12577). > The implementation has now migrated from {{WriteableRPCEngine}} to > {{ProtobufRPCEngine}}. This jira proposes to deprecate > {{WriteableRPCEngine}} in branch-2 and to remove it in trunk. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13202) Implementation of getNBytes in org.apache.hadoop.util.bloom.BloomFilter might be changed
zhengbing li created HADOOP-13202: - Summary: Implementation of getNBytes in org.apache.hadoop.util.bloom.BloomFilter might be changed Key: HADOOP-13202 URL: https://issues.apache.org/jira/browse/HADOOP-13202 Project: Hadoop Common Issue Type: Bug Affects Versions: 2.7.2 Reporter: zhengbing li Current implementation: return (vectorSize + 7) / 8; when vectorSize is 2147483647 (the max value of int), the error "java.lang.NegativeArraySizeException" will be reported. The implementation might be changed to: return (int)((long)vectorSize + 7) / 8; -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12943) Add -w -r options in dfs -test command
[ https://issues.apache.org/jira/browse/HADOOP-12943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299389#comment-15299389 ] Weiwei Yang commented on HADOOP-12943: -- The testDFSShell method already had over 150 lines and some nested blocks before my patch, so I am not sure why it is listed as 2 new issues. It's hard to reduce this test case to fewer than 150 lines. I can fix the nested blocks, but that would modify more than just the issue this patch wants to fix, so probably not for now. The failed test case is tracked by HADOOP-13101 and is not related to this patch. > Add -w -r options in dfs -test command > -- > > Key: HADOOP-12943 > URL: https://issues.apache.org/jira/browse/HADOOP-12943 > Project: Hadoop Common > Issue Type: Improvement > Components: fs, scripts, tools >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Fix For: 2.8.0 > > Attachments: HADOOP-12943.001.patch, HADOOP-12943.002.patch, > HADOOP-12943.003.patch, HADOOP-12943.004.patch > > > Currently the dfs -test command only supports > -d, -e, -f, -s, -z > options. It would be helpful if we add > -w, -r > to verify permission of r/w before actual read or write. This will help > script programming. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-13191) FileSystem#listStatus should not return null
[ https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299346#comment-15299346 ] John Zhuge edited comment on HADOOP-13191 at 5/25/16 2:44 AM: -- Patch 001: * Update {{listStatus}} doc in filesystem.md * Annotate {{FileSystem#listStatus}} with @Nonnull (from javax.annotation supported by IntelliJ) * Restrict {{FileSystem#listStatus}} throw list to {{IOException}} since {{FileNotFoundException}} is a subclass of {{IOException}} * Add {{AccessControlException}} to {{FileSystem#listStatus}} javadoc * Fix {{RawLocalFileSystem#listStatus}} to return empty list when {{localf.list()}} returns null * Fix {{listStatus}} in subclasses of {{FileSystem}} to return empty list instead of null. Only found them in test classes. * Parse 276 callers of {{FileSystem#listStatus}}: ** If it only handles {{FileNotFoundException}} but not {{IOException}}, fix it. Only found 2 cases. ** If it does not have any catch clause, do nothing. ** If it checks null return and list length == 0, essentially no-op, do nothing. Needs discussion: * I am ok not to add @Nonnull annotations although it is nice to have. * What to expect from {{listStatus(path)}} when the path is an accessible file? was (Author: jzhuge): Patch 001: * Update {{listStatus}} doc in filesystem.md * Annotate {{FileSystem#listStatus}} with @Nonnull * Restrict {{FileSystem#listStatus}} throw list to {{IOException}} only since {{FileNotFoundException}} is a subclass of {{IOException}} * Add {{AccessControlException}} to {{FileSystem#listStatus}} javadoc * Fix {{RawLocalFileSystem#listStatus}} to return empty list when {{localf.list()}} returns null * Fix {{listStatus}} in subclasses of {{FileSystem}} to return empty list instead of null. Only found them in tests. * Parse 276 callers of {{FileSystem#listStatus}}: ** If it only handles {{FileNotFoundException}} but not {{IOException}}, fix it. Only found 2 cases. 
** If it does not have any catch clause, do nothing. ** If it checks null return and list length == 0, essentially no-op, do nothing. Needs discussion: * I am ok not to add @Nonnull annotations although it is nice to have. * What to expect from {{listStatus(path)}} when the path is an accessible file? > FileSystem#listStatus should not return null > > > Key: HADOOP-13191 > URL: https://issues.apache.org/jira/browse/HADOOP-13191 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13191.001.patch > > > This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} > contract does not indicate {{null}} is a valid return and some callers do not > test {{null}} before use: > AbstractContractGetFileStatusTest#testListStatusEmptyDirectory: > {code} > assertEquals("ls on an empty directory not of length 0", 0, > fs.listStatus(subfolder).length); > {code} > ChecksumFileSystem#copyToLocalFile: > {code} > FileStatus[] srcs = listStatus(src); > for (FileStatus srcFile : srcs) { > {code} > SimpleCopyLIsting#getFileStatus: > {code} > FileStatus[] fileStatuses = fileSystem.listStatus(path); > if (excludeList != null && excludeList.size() > 0) { > ArrayList fileStatusList = new ArrayList<>(); > for(FileStatus status : fileStatuses) { > {code} > IMHO, there is no good reason for {{listStatus}} to return {{null}}. It > should throw IOExceptions upon errors or return empty list. > To enforce the contract that null is an invalid return, update javadoc and > leverage @Nullable/@NotNull/@Nonnull annotations. > So far, I am only aware of the following functions that can return null: > * RawLocalFileSystem#listStatus -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13191) FileSystem#listStatus should not return null
[ https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13191: Status: Patch Available (was: Reopened) > FileSystem#listStatus should not return null > > > Key: HADOOP-13191 > URL: https://issues.apache.org/jira/browse/HADOOP-13191 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13191.001.patch > > > This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} > contract does not indicate {{null}} is a valid return and some callers do not > test {{null}} before use: > AbstractContractGetFileStatusTest#testListStatusEmptyDirectory: > {code} > assertEquals("ls on an empty directory not of length 0", 0, > fs.listStatus(subfolder).length); > {code} > ChecksumFileSystem#copyToLocalFile: > {code} > FileStatus[] srcs = listStatus(src); > for (FileStatus srcFile : srcs) { > {code} > SimpleCopyLIsting#getFileStatus: > {code} > FileStatus[] fileStatuses = fileSystem.listStatus(path); > if (excludeList != null && excludeList.size() > 0) { > ArrayList fileStatusList = new ArrayList<>(); > for(FileStatus status : fileStatuses) { > {code} > IMHO, there is no good reason for {{listStatus}} to return {{null}}. It > should throw IOExceptions upon errors or return empty list. > To enforce the contract that null is an invalid return, update javadoc and > leverage @Nullable/@NotNull/@Nonnull annotations. > So far, I am only aware of the following functions that can return null: > * RawLocalFileSystem#listStatus -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13191) FileSystem#listStatus should not return null
[ https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13191: Attachment: HADOOP-13191.001.patch Patch 001: * Update {{listStatus}} doc in filesystem.md * Annotate {{FileSystem#listStatus}} with @Nonnull * Restrict {{FileSystem#listStatus}} throw list to {{IOException}} only since {{FileNotFoundException}} is a subclass of {{IOException}} * Add {{AccessControlException}} to {{FileSystem#listStatus}} javadoc * Fix {{RawLocalFileSystem#listStatus}} to return empty list when {{localf.list()}} returns null * Fix {{listStatus}} in subclasses of {{FileSystem}} to return empty list instead of null. Only found them in tests. * Parse 276 callers of {{FileSystem#listStatus}}: ** If it only handles {{FileNotFoundException}} but not {{IOException}}, fix it. Only found 2 cases. ** If it does not have any catch clause, do nothing. ** If it checks null return and list length == 0, essentially no-op, do nothing. Needs discussion: * I am ok not to add @Nonnull annotations although it is nice to have. * What to expect from {{listStatus(path)}} when the path is an accessible file? > FileSystem#listStatus should not return null > > > Key: HADOOP-13191 > URL: https://issues.apache.org/jira/browse/HADOOP-13191 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13191.001.patch > > > This came out of discussion in HADOOP-12718. 
The {{FileSystem#listStatus}} > contract does not indicate {{null}} is a valid return and some callers do not > test {{null}} before use: > AbstractContractGetFileStatusTest#testListStatusEmptyDirectory: > {code} > assertEquals("ls on an empty directory not of length 0", 0, > fs.listStatus(subfolder).length); > {code} > ChecksumFileSystem#copyToLocalFile: > {code} > FileStatus[] srcs = listStatus(src); > for (FileStatus srcFile : srcs) { > {code} > SimpleCopyLIsting#getFileStatus: > {code} > FileStatus[] fileStatuses = fileSystem.listStatus(path); > if (excludeList != null && excludeList.size() > 0) { > ArrayList fileStatusList = new ArrayList<>(); > for(FileStatus status : fileStatuses) { > {code} > IMHO, there is no good reason for {{listStatus}} to return {{null}}. It > should throw IOExceptions upon errors or return empty list. > To enforce the contract that null is an invalid return, update javadoc and > leverage @Nullable/@NotNull/@Nonnull annotations. > So far, I am only aware of the following functions that can return null: > * RawLocalFileSystem#listStatus -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Reopened] (HADOOP-13191) FileSystem#listStatus should not return null
[ https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge reopened HADOOP-13191: - Reopen to submit a patch. > FileSystem#listStatus should not return null > > > Key: HADOOP-13191 > URL: https://issues.apache.org/jira/browse/HADOOP-13191 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > Attachments: HADOOP-13191.001.patch > > > This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} > contract does not indicate {{null}} is a valid return and some callers do not > test {{null}} before use: > AbstractContractGetFileStatusTest#testListStatusEmptyDirectory: > {code} > assertEquals("ls on an empty directory not of length 0", 0, > fs.listStatus(subfolder).length); > {code} > ChecksumFileSystem#copyToLocalFile: > {code} > FileStatus[] srcs = listStatus(src); > for (FileStatus srcFile : srcs) { > {code} > SimpleCopyLIsting#getFileStatus: > {code} > FileStatus[] fileStatuses = fileSystem.listStatus(path); > if (excludeList != null && excludeList.size() > 0) { > ArrayList fileStatusList = new ArrayList<>(); > for(FileStatus status : fileStatuses) { > {code} > IMHO, there is no good reason for {{listStatus}} to return {{null}}. It > should throw IOExceptions upon errors or return empty list. > To enforce the contract that null is an invalid return, update javadoc and > leverage @Nullable/@NotNull/@Nonnull annotations. > So far, I am only aware of the following functions that can return null: > * RawLocalFileSystem#listStatus -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13191) FileSystem#listStatus should not return null
[ https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13191: Description: This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} contract does not indicate {{null}} is a valid return and some callers do not test {{null}} before use: AbstractContractGetFileStatusTest#testListStatusEmptyDirectory: {code} assertEquals("ls on an empty directory not of length 0", 0, fs.listStatus(subfolder).length); {code} ChecksumFileSystem#copyToLocalFile: {code} FileStatus[] srcs = listStatus(src); for (FileStatus srcFile : srcs) { {code} SimpleCopyLIsting#getFileStatus: {code} FileStatus[] fileStatuses = fileSystem.listStatus(path); if (excludeList != null && excludeList.size() > 0) { ArrayList fileStatusList = new ArrayList<>(); for(FileStatus status : fileStatuses) { {code} IMHO, there is no good reason for {{listStatus}} to return {{null}}. It should throw IOExceptions upon errors or return empty list. To enforce the contract that null is an invalid return, update javadoc and leverage @Nullable/@NotNull/@Nonnull annotations. So far, I am only aware of the following functions that can return null: * RawLocalFileSystem#listStatus was: This came out of discussion in HADOOP-12718. 
The {{FileSystem#listStatus}} contract does not indicate {{null}} is a valid return and some callers do not test {{null}} before use: AbstractContractGetFileStatusTest#testListStatusEmptyDirectory: {code} assertEquals("ls on an empty directory not of length 0", 0, fs.listStatus(subfolder).length); {code} ChecksumFileSystem#copyToLocalFile: {code} FileStatus[] srcs = listStatus(src); for (FileStatus srcFile : srcs) { {code} SimpleCopyLIsting#getFileStatus: {code} FileStatus[] fileStatuses = fileSystem.listStatus(path); if (excludeList != null && excludeList.size() > 0) { ArrayList fileStatusList = new ArrayList<>(); for(FileStatus status : fileStatuses) { {code} IMHO, there is no good reason for {{listStatus}} to return {{null}}. It should return empty list instead. To enforce the contract that null is an invalid return, update javadoc and consider Intellij IDEA's @Nullable and @NotNull annotations. So far, I am only aware of the following functions that can return null: * RawLocalFileSystem#listStatus > FileSystem#listStatus should not return null > > > Key: HADOOP-13191 > URL: https://issues.apache.org/jira/browse/HADOOP-13191 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > > This came out of discussion in HADOOP-12718. 
The {{FileSystem#listStatus}} > contract does not indicate {{null}} is a valid return and some callers do not > test {{null}} before use: > AbstractContractGetFileStatusTest#testListStatusEmptyDirectory: > {code} > assertEquals("ls on an empty directory not of length 0", 0, > fs.listStatus(subfolder).length); > {code} > ChecksumFileSystem#copyToLocalFile: > {code} > FileStatus[] srcs = listStatus(src); > for (FileStatus srcFile : srcs) { > {code} > SimpleCopyLIsting#getFileStatus: > {code} > FileStatus[] fileStatuses = fileSystem.listStatus(path); > if (excludeList != null && excludeList.size() > 0) { > ArrayList fileStatusList = new ArrayList<>(); > for(FileStatus status : fileStatuses) { > {code} > IMHO, there is no good reason for {{listStatus}} to return {{null}}. It > should throw IOExceptions upon errors or return empty list. > To enforce the contract that null is an invalid return, update javadoc and > leverage @Nullable/@NotNull/@Nonnull annotations. > So far, I am only aware of the following functions that can return null: > * RawLocalFileSystem#listStatus -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
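The root cause in {{RawLocalFileSystem#listStatus}} is that {{java.io.File#list()}} returns {{null}} both for a missing path and on I/O errors. The sketch below illustrates the null-safe pattern the patch describes (throw for a missing path, return an empty array otherwise); the class and method here are simplified stand-ins, not the actual Hadoop source.

```java
import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

// Simplified illustration of the listStatus fix discussed above: never
// propagate the null that File#list() can return. Names are hypothetical.
class LocalListing {
    static String[] listStatus(File dir) throws IOException {
        if (!dir.exists()) {
            // Missing path: an exception, not null.
            throw new FileNotFoundException("File " + dir + " does not exist");
        }
        if (dir.isFile()) {
            // A plain file "lists" as itself.
            return new String[] { dir.getPath() };
        }
        String[] names = dir.list();
        if (names == null) {
            // Directory unreadable (e.g. permissions): return an empty
            // array rather than null so callers can iterate safely.
            return new String[0];
        }
        return names;
    }
}
```

With this shape, callers like {{ChecksumFileSystem#copyToLocalFile}} can loop over the result without a null check.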
[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299299#comment-15299299 ] Kai Zheng commented on HADOOP-13010: The failed tests were checked and are not related to this change. Most of the checkstyle issues are intentional; the one below will be fixed, either here along with any other review comments or in HADOOP-11540. {noformat} CoderUtil.java:107: /**: First sentence should end with a period. {noformat} > Refactor raw erasure coders > --- > > Key: HADOOP-13010 > URL: https://issues.apache.org/jira/browse/HADOOP-13010 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, > HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, > HADOOP-13010-v6.patch, HADOOP-13010-v7.patch > > > This will refactor raw erasure coders according to some comments received so > far. > * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to > rely on class inheritance to reuse the code; instead it can be moved to some > utility. > * Suggested by [~jingzhao] somewhere quite some time ago, better to have a > state holder to keep some checking results for later reuse during an > encode/decode call. > This will not get rid of some inheritance levels, as doing so isn't clear yet > for the moment and also incurs a big impact. I do hope the end result of this > refactoring will make all the levels clearer and easier to follow. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299300#comment-15299300 ] Hudson commented on HADOOP-13198: - SUCCESS: Integrated in Hadoop-trunk-Commit #9852 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/9852/]) HADOOP-13198. Add support for OWASP's dependency-check. Contributed by (wang: rev 09b866fd45664ff977702b58b6338ce209729a97) * pom.xml > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build, security >Affects Versions: 2.6.4 >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13198: - Resolution: Fixed Fix Version/s: 2.8.0 Status: Resolved (was: Patch Available) Great! I've committed this to trunk, branch-2, branch-2.8. Thanks Mike for finding and fixing this, and Larry for discussion and review. We need to triage the current plugin output to determine what is safe to ignore. Would one of you be interested in taking this one? Then we can put together a wiki page and add it to the release steps. > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build, security >Affects Versions: 2.6.4 >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Fix For: 2.8.0 > > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
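For readers who want to try the plugin discussed above: the OWASP dependency-check Maven plugin is declared in the top-level pom.xml along the lines of the fragment below. This is a hedged sketch, not the exact configuration committed in HADOOP-13198; the version shown is illustrative, and the patch may bind the plugin differently.

```xml
<!-- Illustrative wiring of OWASP dependency-check into a multi-module
     build; version and goal binding are assumptions, not the exact
     HADOOP-13198 configuration. -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>1.3.6</version>
  <configuration>
    <!-- Emit the HTML report attached to this issue. -->
    <format>HTML</format>
  </configuration>
  <executions>
    <execution>
      <goals>
        <!-- "aggregate" scans all modules into one report. -->
        <goal>aggregate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Once declared, a report can be generated on demand with {{mvn org.owasp:dependency-check-maven:aggregate}}; the first run downloads the NVD data, so it is slow.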
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299278#comment-15299278 ] Andrew Wang commented on HADOOP-12893: -- Missed the jdiff question: bq. jdiff is not bundled. But it is used in source (in hadoop-project/pom.xml). I'm not sure how to proceed with this one... Shall we 1) try to remove it completely, 2) leave it as-is, or 3) replace it with something else? We listed jdiff in LICENSE for source distribution (for honesty...) If jdiff is not bundled, then we don't need to talk about it in L&N. Mentioning it in a pom.xml file is okay, it'll be downloaded on-demand then. So I vote option 1). Thanks again for pushing on this Xiao! > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.01.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299276#comment-15299276 ] Andrew Wang commented on HADOOP-12893: -- I made the spreadsheet accessible publicly, ping me if you want edit access: https://docs.google.com/spreadsheets/d/1HL2b4PSdQMZDVJmum1GIKrteFr2oainApTLiJTPnfd4/edit?usp=sharing Reviewing the patch, there are a lot of things listed as "in source distribution" that I didn't think were there, e.g. {noformat} For: servlet-api 2.5 jsp-api 2.1 JavaBeans Activation Framework Java Servlet API 3.0.1 in source distribution, and {noformat} I thought we had all the source distribution items covered already, so these new additions would only apply to the binary distribution. There's also a few copies of the GPL and LGPL still in LICENSES. This content was supposed to be pulled from the Licenses tab on the spreadsheet, apparently not? > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.01.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-10048) LocalDirAllocator should avoid holding locks while accessing the filesystem
[ https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299261#comment-15299261 ] Junping Du commented on HADOOP-10048: - Thanks [~jlowe] for updating the patch. I am OK with a lockless approach to solve the problem. However, the existing patch (004) does not seem to guarantee consistency for concurrent calls to {{getLocalPathForWrite(.., conf, ...)}} if conf gets changed in the multi-threaded case. Consider thread A (conf1) and thread B (conf2) calling this method at the same time: thread A hits confChanged() with conf1 first, then thread B hits confChanged() with conf2; afterwards thread A goes forward to get a path for write that is now based on conf2 - that means thread A could get a path that does not exist in conf1. I think we should keep consistency between the API's parameters and its return result, e.g. we can let confChanged() return a thread-local Context object and do the subsequent work with that local context. What do you think? > LocalDirAllocator should avoid holding locks while accessing the filesystem > --- > > Key: HADOOP-10048 > URL: https://issues.apache.org/jira/browse/HADOOP-10048 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 2.3.0 >Reporter: Jason Lowe >Assignee: Jason Lowe > Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, > HADOOP-10048.patch, HADOOP-10048.trunk.patch > > > As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a > bottleneck for multithreaded setups like the ShuffleHandler. We should > consider moving to a lockless design or minimizing the critical sections to a > very small amount of time that does not involve I/O operations. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
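The race described above, and the suggested fix, can be sketched in isolation: each call captures one immutable snapshot ("context") of the configuration and works only against that local reference, so a concurrent confChanged() cannot swap the directory list mid-call. This is a simplified illustration with hypothetical names, not the actual LocalDirAllocator code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the per-call snapshot pattern suggested in the comment above.
class DirAllocatorSketch {
    // Immutable snapshot that a single call works against end to end.
    static final class Context {
        final List<String> dirs;
        final AtomicInteger next = new AtomicInteger(0);
        Context(List<String> dirs) { this.dirs = dirs; }
    }

    private final AtomicReference<Context> current =
        new AtomicReference<>(new Context(Arrays.asList()));

    // Analogue of confChanged(): publish a whole new snapshot atomically.
    Context updateDirs(List<String> dirs) {
        Context ctx = new Context(dirs);
        current.set(ctx);
        return ctx;
    }

    // Analogue of getLocalPathForWrite(): read the snapshot once, then use
    // only that local reference, so a concurrent update cannot interleave.
    String getPathForWrite(String file) {
        Context ctx = current.get();          // single read; stays consistent
        int i = Math.floorMod(ctx.next.getAndIncrement(), ctx.dirs.size());
        return ctx.dirs.get(i) + "/" + file;
    }
}
```

Here no lock is held during path selection, yet every returned path comes from exactly one published configuration, which is the consistency property the comment asks for.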
[jira] [Commented] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299254#comment-15299254 ] Hadoop QA commented on HADOOP-13197: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 31s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 17s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 
17s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 22s {color} | {color:red} hadoop-common-project/hadoop-common: The patch generated 2 new + 38 unchanged - 3 fixed = 40 total (was 41) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 57s {color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 39m 32s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806025/HADOOP-13197.00.patch | | JIRA Issue | HADOOP-13197 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux a5f886c2198d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / edd716e | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9573/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9573/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9573/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add non-decayed call metrics for DecayRpcScheduler > -- > > Key: HADOOP-13197 > URL: https://issues.apache.org/jira/browse/HADOOP-13197 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc, metrics >Reporter: Xiaoyu Yao >
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299226#comment-15299226 ] Larry McCay commented on HADOOP-13198: -- +1 on the patch, on putting it on the RM checklist, and on making it part of what is tested. It should actually be evaluated before publishing an RC. > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build, security >Affects Versions: 2.6.4 >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299221#comment-15299221 ] Hadoop QA commented on HADOOP-13198: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 14s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 7s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 15s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 41s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 51s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 8m 
53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 34s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 35s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 132m 30s {color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 189m 19s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.yarn.server.resourcemanager.TestClientRMTokens | | | hadoop.yarn.server.resourcemanager.TestAMAuthorization | | | hadoop.mapreduce.tools.TestCLI | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805988/HADOOP-13198.001.patch | | JIRA Issue | HADOOP-13198 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit xml | | uname | Linux 3f047b8b51c2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 15ed080 | | Default Java | 1.8.0_91 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/9569/artifact/patchprocess/patch-unit-root.txt | | unit test logs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9569/artifact/patchprocess/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9569/testReport/ | | modules | C: . U: . | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9569/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build, security >Affects Versions: 2.6.4 >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWA
[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13198: - Affects Version/s: 2.6.4 Target Version/s: 2.8.0 I'm also setting the target version to 2.8.0 for now. I haven't heard much about further 2.6 or 2.7 releases, but can backport if it becomes relevant. > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build, security >Affects Versions: 2.6.4 >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13198: - Component/s: security > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build, security >Affects Versions: 2.6.4 >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299207#comment-15299207 ] Andrew Wang commented on HADOOP-13198: -- The CVE database is public, so publishing the output from this plugin isn't revealing any new info. I think that's fine. Let's defer the pre/post commit discussion. I'm happy as long as someone is running this occasionally and looking at the output. Another option I thought of is adding it to the RM checklist. People voting on the release could also check this while validating the artifacts. See: https://wiki.apache.org/hadoop/HowToRelease Overall I'm +1 on this, will commit later unless someone raises some objections. Thanks y'all. > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-13197: Attachment: HADOOP-13197.00.patch Attach a patch that collects non-decayed call metrics for DecayRpcScheduler. > Add non-decayed call metrics for DecayRpcScheduler > -- > > Key: HADOOP-13197 > URL: https://issues.apache.org/jira/browse/HADOOP-13197 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc, metrics >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HADOOP-13197.00.patch > > > DecayRpcScheduler currently exposes decayed call count over the time. It will > be useful to expose the non-decayed raw count for monitoring applications. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
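The distinction in the description above - a decayed count that shrinks over time versus a raw count that only grows - can be sketched as follows. The names and the 0.5 decay factor are illustrative; the actual HADOOP-13197 patch may structure this differently:

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of keeping a non-decayed (raw) counter alongside the decayed one.
// The decayed count is periodically scaled down (on a timer in the real
// scheduler; invoked explicitly here), while the raw count is monotonic.
class CallCounterSketch {
    private final AtomicLong decayedCount = new AtomicLong();
    private final AtomicLong rawCount = new AtomicLong();

    void recordCall() {
        decayedCount.incrementAndGet();
        rawCount.incrementAndGet();
    }

    // Scale the decayed counter; the raw counter is deliberately untouched.
    void decay(double factor) {
        decayedCount.set((long) (decayedCount.get() * factor));
    }

    long getDecayedCount() { return decayedCount.get(); }
    long getRawCount()     { return rawCount.get(); }
}
```

The raw counter gives monitoring systems a monotonically increasing value they can compute rates from, while the decayed counter stays useful for the scheduler's recency-weighted prioritization.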
[jira] [Updated] (HADOOP-13197) Add non-decayed call metrics for DecayRpcScheduler
[ https://issues.apache.org/jira/browse/HADOOP-13197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HADOOP-13197: Status: Patch Available (was: Open) > Add non-decayed call metrics for DecayRpcScheduler > -- > > Key: HADOOP-13197 > URL: https://issues.apache.org/jira/browse/HADOOP-13197 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc, metrics >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao > Attachments: HADOOP-13197.00.patch > > > DecayRpcScheduler currently exposes decayed call count over the time. It will > be useful to expose the non-decayed raw count for monitoring applications. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13155) Implement TokenRenewer to renew and cancel delegation tokens in KMS
[ https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13155: --- Summary: Implement TokenRenewer to renew and cancel delegation tokens in KMS (was: Implement TokenRenewer in KMS and HttpFS) > Implement TokenRenewer to renew and cancel delegation tokens in KMS > --- > > Key: HADOOP-13155 > URL: https://issues.apache.org/jira/browse/HADOOP-13155 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, > HADOOP-13155.03.patch, HADOOP-13155.pre.patch > > > Service DelegationToken (DT) renewal is done in Yarn by > {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}}, > where it calls {{Token#renew}} and uses ServiceLoader to find the renewer > class > ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]), > and invokes the renew method from it. > We seem to miss the token renewer class in KMS / HttpFSFileSystem, and hence > Yarn defaults to {{TrivialRenewer}} for DT of such kinds, resulting in the > token not being renewed. > As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} > API, but I don't see it invoked in hadoop code base. KMS does not have any > renew hook. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299176#comment-15299176 ] Mike Yoder commented on HADOOP-13198: - Another thing to consider with a precommit hook is that the data that dependency-check uses for CVEs is, quite literally, the CVE database. If something pops up there, the results of dependency-check will change shortly thereafter - potentially blocking innocent submittals because suddenly things look worse. To get serious about things, we'd want to somehow lock down the ability to add new dependencies. IIRC Solr does something with jar signing. > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299173#comment-15299173 ] Hadoop QA commented on HADOOP-13010: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 6 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 43s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 21s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 22s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 35s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 40s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 2m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 43s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 22s {color} | {color:red} root: The patch generated 22 new + 138 unchanged - 11 fixed = 160 total (was 149) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 36s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 2s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 23s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 2s {color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 55s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 66m 31s {color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 122m 50s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics | | | hadoop.hdfs.TestErasureCodeBenchmarkThroughput | | | hadoop.hdfs.server.datanode.TestFsDatasetCache | | Timed out junit tests | org.apache.hadoop.hdfs.TestLeaseRecovery2 | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805996/HADOOP-13010-v7.patch | | JIRA Issue | HADOOP-13010 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 495d69548d47 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 15ed080 | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/9570/artifact/patchproces
[jira] [Commented] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs
[ https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299156#comment-15299156 ] Hadoop QA commented on HADOOP-13201: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s {color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 30s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 29s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 31s 
{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 31s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 32s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 50s {color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 39m 29s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12806009/HADOOP-13201.000.patch | | JIRA Issue | HADOOP-13201 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 8a6c6112300d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / edd716e | | Default Java | 1.8.0_91 | | findbugs | v3.0.0 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/9572/testReport/ | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9572/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Print the directory paths when ViewFs denies the rename operation on internal > dirs > -- > > Key: HADOOP-13201 > URL: https://issues.apache.org/jira/browse/HADOOP-13201 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 2.7.2 >Reporter: Tianyin Xu > Attachments: HADOOP-13201.0
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299152#comment-15299152 ] Larry McCay commented on HADOOP-13198: -- I think we might have to be careful about what is published openly as a result of a precommit or even periodic scans, come to think of it. Precommit might be okay if we are blocking it from getting in. We need to discuss with security@a.o to be sure. > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
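The quoted description notes that, being a Maven plugin, dependency-check is "pretty easy to drop in". As a rough sketch of such a plugin declaration (the version and goal choice here are illustrative assumptions; the actual HADOOP-13198.001.patch may differ):

```xml
<!-- Illustrative sketch only; version and goal are assumptions,
     not the contents of the attached patch. -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <version>1.3.6</version>
  <executions>
    <execution>
      <goals>
        <!-- "aggregate" produces one report covering all modules -->
        <goal>aggregate</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

With a configuration like this, a report for the whole tree can be generated on demand via `mvn org.owasp:dependency-check-maven:aggregate`, which fits the "occasional run" model discussed below.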
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299139#comment-15299139 ] Larry McCay commented on HADOOP-13198: -- I think that precommit would be great. No one should be able to commit a change that introduces a new vulnerable dependency. Would it be possible to make that the criteria? Only block new dependencies that have vulnerabilities? Then periodic runs for existing dependencies could have a bounty for showing progress per release or something like that? Zero vulnerabilities would be too much especially in the beginning. > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299122#comment-15299122 ] Mike Yoder commented on HADOOP-13198: - (pre|post)commit integration seems rather excessive to me; hopefully third party libraries change slowly. Occasional runs (monthly? per release?) seem fine to me. > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299117#comment-15299117 ] Andrew Wang commented on HADOOP-13198: -- LGTM. Do we need precommit or postcommit integration, or is the assumption that someone is running this occasionally and triaging the output? > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs
[ https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tianyin Xu updated HADOOP-13201: Status: Patch Available (was: Open) > Print the directory paths when ViewFs denies the rename operation on internal > dirs > -- > > Key: HADOOP-13201 > URL: https://issues.apache.org/jira/browse/HADOOP-13201 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 2.7.2 >Reporter: Tianyin Xu > Attachments: HADOOP-13201.000.patch > > > With ViewFs, the delete and rename operations on internal dirs are denied by > throwing {{AccessControlException}}. > Unlike the {{delete()}} which notify the internal dir path, rename does not. > The attached patch appends the directory path on the logged exception. > {code:title=ViewFs.java|borderStyle=solid} > InodeTree.ResolveResult resSrc = >fsState.resolve(getUriPath(src), false); > if (resSrc.isInternalDir()) { >throw new AccessControlException( > - "Cannot Rename within internal dirs of mount table: it is > readOnly"); > + "Cannot Rename within internal dirs of mount table: it is readOnly" > + + src); > } > > InodeTree.ResolveResult resDst = > fsState.resolve(getUriPath(dst), false); > if (resDst.isInternalDir()) { >throw new AccessControlException( > - "Cannot Rename within internal dirs of mount table: it is > readOnly"); > + "Cannot Rename within internal dirs of mount table: it is readOnly" > + + dst); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs
[ https://issues.apache.org/jira/browse/HADOOP-13201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tianyin Xu updated HADOOP-13201: Attachment: HADOOP-13201.000.patch > Print the directory paths when ViewFs denies the rename operation on internal > dirs > -- > > Key: HADOOP-13201 > URL: https://issues.apache.org/jira/browse/HADOOP-13201 > Project: Hadoop Common > Issue Type: Bug > Components: viewfs >Affects Versions: 2.7.2 >Reporter: Tianyin Xu > Attachments: HADOOP-13201.000.patch > > > With ViewFs, the delete and rename operations on internal dirs are denied by > throwing {{AccessControlException}}. > Unlike the {{delete()}} which notify the internal dir path, rename does not. > The attached patch appends the directory path on the logged exception. > {code:title=ViewFs.java|borderStyle=solid} > InodeTree.ResolveResult resSrc = >fsState.resolve(getUriPath(src), false); > if (resSrc.isInternalDir()) { >throw new AccessControlException( > - "Cannot Rename within internal dirs of mount table: it is > readOnly"); > + "Cannot Rename within internal dirs of mount table: it is readOnly" > + + src); > } > > InodeTree.ResolveResult resDst = > fsState.resolve(getUriPath(dst), false); > if (resDst.isInternalDir()) { >throw new AccessControlException( > - "Cannot Rename within internal dirs of mount table: it is > readOnly"); > + "Cannot Rename within internal dirs of mount table: it is readOnly" > + + dst); > } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13201) Print the directory paths when ViewFs denies the rename operation on internal dirs
Tianyin Xu created HADOOP-13201: --- Summary: Print the directory paths when ViewFs denies the rename operation on internal dirs Key: HADOOP-13201 URL: https://issues.apache.org/jira/browse/HADOOP-13201 Project: Hadoop Common Issue Type: Bug Components: viewfs Affects Versions: 2.7.2 Reporter: Tianyin Xu With ViewFs, the delete and rename operations on internal dirs are denied by throwing {{AccessControlException}}. Unlike {{delete()}}, which reports the internal dir path, rename does not. The attached patch appends the directory path to the logged exception. {code:title=ViewFs.java|borderStyle=solid} InodeTree.ResolveResult resSrc = fsState.resolve(getUriPath(src), false); if (resSrc.isInternalDir()) { throw new AccessControlException( - "Cannot Rename within internal dirs of mount table: it is readOnly"); + "Cannot Rename within internal dirs of mount table: it is readOnly" + + src); } InodeTree.ResolveResult resDst = fsState.resolve(getUriPath(dst), false); if (resDst.isInternalDir()) { throw new AccessControlException( - "Cannot Rename within internal dirs of mount table: it is readOnly"); + "Cannot Rename within internal dirs of mount table: it is readOnly" + + dst); } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
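For reference, the message the patched code would build looks like the sketch below. Note this is an editorial illustration, not the attached patch: the ": " separator before the path is an assumption added here for readability, while the diff above concatenates the path directly after "readOnly".

```java
// Sketch of the denial message with the offending path appended.
// The ": " separator is an editorial assumption, not part of the patch.
public class RenameDenialMessage {
    static String denyRename(String path) {
        return "Cannot Rename within internal dirs of mount table: it is readOnly: "
            + path;
    }

    public static void main(String[] args) {
        // prints the message an operator would now see, including the path
        System.out.println(denyRename("/mnt/internalDir"));
    }
}
```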
[jira] [Commented] (HADOOP-13199) Add doc for distcp -filters
[ https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299026#comment-15299026 ] Hadoop QA commented on HADOOP-13199: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 47s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 12s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805999/HADOOP-13199.001.patch | | JIRA Issue | HADOOP-13199 | | Optional Tests | asflicense mvnsite | | uname | Linux 63d431970494 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 02d4e47 | | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/9571/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add doc for distcp -filters > --- > > Key: HADOOP-13199 > URL: https://issues.apache.org/jira/browse/HADOOP-13199 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Labels: supportability > Attachments: HADOOP-13199.001.patch > > > Update distcp doc to reflect -filters option added by HADOOP-1540. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299016#comment-15299016 ] Kai Zheng commented on HADOOP-13010: Thanks [~cmccabe] for your time driving this and talking to me. It's very helpful, and I'm glad you like the latest patch. Just opened HADOOP-13200 to address the follow-on task; I will copy the relevant comments there so we can resume the discussion later. Note that the latest patch fixed the issue you mentioned. Could you help trigger Jenkins if convenient? Thanks again. > Refactor raw erasure coders > --- > > Key: HADOOP-13010 > URL: https://issues.apache.org/jira/browse/HADOOP-13010 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, > HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, > HADOOP-13010-v6.patch, HADOOP-13010-v7.patch > > > This will refactor raw erasure coders according to some comments received so > far. > * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to > rely on class inheritance to reuse code; instead it can be moved to a > utility. > * As suggested by [~jingzhao] quite some time ago, better to have a > state holder to keep some checking results for later reuse during an > encode/decode call. > This would not get rid of some inheritance levels, as doing so isn't clear yet > and also incurs a big impact. I do hope the end result of this > refactoring will make all the levels clearer and easier to follow. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt
[ https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15299017#comment-15299017 ] Xiao Chen commented on HADOOP-12893: Hi [~ozawa], Thanks again for helping out. Were you able to try the patch? I should have mentioned that the current jdiff scope {{provided}} doesn't bundle it into the jars. Making the scope of jdiff to "compile" does make it show up though, so I didn't include that change in the latest patch. Is it okay to have it in our deps, but not bundled? (That is, the as-is option in my above [comment|https://issues.apache.org/jira/browse/HADOOP-12893?focusedCommentId=15295903&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15295903]) > Verify LICENSE.txt and NOTICE.txt > - > > Key: HADOOP-12893 > URL: https://issues.apache.org/jira/browse/HADOOP-12893 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1 >Reporter: Allen Wittenauer >Assignee: Xiao Chen >Priority: Blocker > Attachments: HADOOP-12893.002.patch, HADOOP-12893.003.patch, > HADOOP-12893.004.patch, HADOOP-12893.005.patch, HADOOP-12893.01.patch > > > We have many bundled dependencies in both the source and the binary artifacts > that are not in LICENSE.txt and NOTICE.txt. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13200) Seeking a better approach allowing to customize and configure erasure coders
Kai Zheng created HADOOP-13200: -- Summary: Seeking a better approach allowing to customize and configure erasure coders Key: HADOOP-13200 URL: https://issues.apache.org/jira/browse/HADOOP-13200 Project: Hadoop Common Issue Type: Sub-task Reporter: Kai Zheng Assignee: Kai Zheng This is a follow-on task for HADOOP-13010, as discussed over there. There may be a better approach to customizing and configuring erasure coders than the current raw coder factory, as [~cmccabe] suggested. Will copy the relevant comments here to continue the discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
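To make the "raw coder factory" approach under discussion concrete, here is a minimal sketch of that pattern: a registry mapping a configured codec name to a factory that builds the coder. All names below are invented for illustration and do not correspond to Hadoop's actual classes or API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical sketch of a configurable coder factory/registry;
// names are illustrative, not Hadoop's real erasure-coding API.
public class CoderRegistry {
    interface RawErasureEncoder {
        String codecName();
    }

    // Maps a codec name (e.g. taken from configuration) to a factory.
    private final Map<String, Supplier<RawErasureEncoder>> factories = new HashMap<>();

    void register(String codecName, Supplier<RawErasureEncoder> factory) {
        factories.put(codecName, factory);
    }

    RawErasureEncoder create(String codecName) {
        Supplier<RawErasureEncoder> factory = factories.get(codecName);
        if (factory == null) {
            throw new IllegalArgumentException("No coder registered for: " + codecName);
        }
        return factory.get();
    }

    public static void main(String[] args) {
        CoderRegistry registry = new CoderRegistry();
        // A stand-in pure-Java coder; a native-accelerated one could be
        // registered under another name and selected via configuration.
        registry.register("rs-java", () -> () -> "rs-java");
        System.out.println(registry.create("rs-java").codecName());
    }
}
```

The point of the pattern is that swapping implementations (pure Java vs. native) becomes a configuration change rather than a code change; whether this is better than other customization mechanisms is exactly what the JIRA leaves open.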
[jira] [Updated] (HADOOP-13199) Add doc for distcp -filters
[ https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13199: Status: Patch Available (was: In Progress) > Add doc for distcp -filters > --- > > Key: HADOOP-13199 > URL: https://issues.apache.org/jira/browse/HADOOP-13199 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Labels: supportability > Attachments: HADOOP-13199.001.patch > > > Update distcp doc to reflect -filters option added by HADOOP-1540. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13199) Add doc for distcp -filters
[ https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13199: Attachment: HADOOP-13199.001.patch Patch 001: * Add doc for option {{-filters}} * List options in alphabetic order > Add doc for distcp -filters > --- > > Key: HADOOP-13199 > URL: https://issues.apache.org/jira/browse/HADOOP-13199 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Labels: supportability > Attachments: HADOOP-13199.001.patch > > > Update distcp doc to reflect -filters option added by HADOOP-1540. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
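For context on what the new doc covers: {{-filters}} takes the path of a file listing one regular expression per line, and source paths matching any of those patterns are excluded from the copy. The snippet below is a plain-Java approximation of that exclusion rule, not distcp's actual implementation (in particular, whether distcp anchors the match over the whole path is an assumption here).

```java
import java.util.List;
import java.util.regex.Pattern;

// Approximation of the distcp -filters rule: a path is copied only if it
// matches none of the exclusion patterns from the filters file.
public class FilterSketch {
    static boolean shouldCopy(String path, List<Pattern> filters) {
        for (Pattern p : filters) {
            if (p.matcher(path).matches()) {
                return false; // excluded by a -filters pattern
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Illustrative patterns such as might appear in the filters file.
        List<Pattern> filters = List.of(
            Pattern.compile(".*\\.tmp"),
            Pattern.compile(".*/_logs/.*"));
        System.out.println(shouldCopy("/data/part-00000", filters)); // true
        System.out.println(shouldCopy("/data/file.tmp", filters));   // false
    }
}
```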
[jira] [Updated] (HADOOP-13010) Refactor raw erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-13010: --- Attachment: HADOOP-13010-v7.patch Updated the patch to fix the failure mentioned above, and also manually checked many related tests. Not sure why Jenkins did not work for this as before; I guess the latest change cuts across both the HADOOP and HDFS sides. > Refactor raw erasure coders > --- > > Key: HADOOP-13010 > URL: https://issues.apache.org/jira/browse/HADOOP-13010 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, > HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, > HADOOP-13010-v6.patch, HADOOP-13010-v7.patch > > > This will refactor raw erasure coders according to some comments received so > far. > * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to > rely on class inheritance to reuse code; instead it can be moved to a > utility. > * As suggested by [~jingzhao] quite some time ago, better to have a > state holder to keep some checking results for later reuse during an > encode/decode call. > This would not get rid of some inheritance levels, as doing so isn't clear yet > and also incurs a big impact. I do hope the end result of this > refactoring will make all the levels clearer and easier to follow. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13199) Add doc for distcp -filters
[ https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298984#comment-15298984 ] John Zhuge commented on HADOOP-13199: - No need to add back the doc for option {{-mapredSslConf}} that was removed by HDFS-9640. > Add doc for distcp -filters > --- > > Key: HADOOP-13199 > URL: https://issues.apache.org/jira/browse/HADOOP-13199 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Labels: supportability > > Update distcp doc to reflect -filters option added by HADOOP-1540. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work started] (HADOOP-13199) Add doc for distcp -filters
[ https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-13199 started by John Zhuge. --- > Add doc for distcp -filters > --- > > Key: HADOOP-13199 > URL: https://issues.apache.org/jira/browse/HADOOP-13199 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Labels: supportability > > Update distcp doc to reflect -filters option added by HADOOP-1540. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13199) Add doc for distcp -filters
[ https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298960#comment-15298960 ] John Zhuge commented on HADOOP-13199: - Also add doc for option {{-mapredSslConf}}. List options in alphabetical order. > Add doc for distcp -filters > --- > > Key: HADOOP-13199 > URL: https://issues.apache.org/jira/browse/HADOOP-13199 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Labels: supportability > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13199) Add doc for distcp -filters
[ https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13199: Release Note: (was: Update distcp doc to reflect -filters option added by HADOOP-1540.) > Add doc for distcp -filters > --- > > Key: HADOOP-13199 > URL: https://issues.apache.org/jira/browse/HADOOP-13199 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Labels: supportability > -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13199) Add doc for distcp -filters
[ https://issues.apache.org/jira/browse/HADOOP-13199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] John Zhuge updated HADOOP-13199: Description: Update distcp doc to reflect -filters option added by HADOOP-1540. > Add doc for distcp -filters > --- > > Key: HADOOP-13199 > URL: https://issues.apache.org/jira/browse/HADOOP-13199 > Project: Hadoop Common > Issue Type: Improvement > Components: documentation >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Trivial > Labels: supportability > > Update distcp doc to reflect -filters option added by HADOOP-1540. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-13199) Add doc for distcp -filters
John Zhuge created HADOOP-13199: --- Summary: Add doc for distcp -filters Key: HADOOP-13199 URL: https://issues.apache.org/jira/browse/HADOOP-13199 Project: Hadoop Common Issue Type: Improvement Components: documentation Affects Versions: 2.6.0 Reporter: John Zhuge Assignee: John Zhuge Priority: Trivial -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Yoder updated HADOOP-13198: Status: Patch Available (was: Open) > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298911#comment-15298911 ] Mike Yoder commented on HADOOP-13198: - Pinging [~andrew.wang], [~atm], and [~ste...@apache.org] > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Yoder updated HADOOP-13198: Description: OWASP's Dependency-Check is a utility that identifies project dependencies and checks if there are any known, publicly disclosed, vulnerabilities. See https://www.owasp.org/index.php/OWASP_Dependency_Check This is very useful to stay on top of known vulnerabilities in third party jars. Since it's a maven plugin it's pretty easy to drop in. was: OWASP's Dependency-Check is a utility that identifies project dependencies and checks if there are any known, publicly disclosed, vulnerabilities. See https://www.owasp.org/index.php/OWASP_Dependency_Check This is very useful to stay on top of known vulnerabilities in third party jars. Since it's a maven plugin > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin it's pretty easy to drop in. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Yoder updated HADOOP-13198: Attachment: HADOOP-13198.001.patch > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: HADOOP-13198.001.patch, > hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin
[jira] [Updated] (HADOOP-13198) Add support for OWASP's dependency-check
[ https://issues.apache.org/jira/browse/HADOOP-13198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mike Yoder updated HADOOP-13198: Attachment: hadoop-all-dependency-check-report.html > Add support for OWASP's dependency-check > > > Key: HADOOP-13198 > URL: https://issues.apache.org/jira/browse/HADOOP-13198 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Reporter: Mike Yoder >Assignee: Mike Yoder >Priority: Minor > Attachments: hadoop-all-dependency-check-report.html > > > OWASP's Dependency-Check is a utility that identifies project > dependencies and checks if there are any known, publicly disclosed, > vulnerabilities. > See https://www.owasp.org/index.php/OWASP_Dependency_Check > This is very useful to stay on top of known vulnerabilities in third party > jars. Since it's a maven plugin
[jira] [Created] (HADOOP-13198) Add support for OWASP's dependency-check
Mike Yoder created HADOOP-13198: --- Summary: Add support for OWASP's dependency-check Key: HADOOP-13198 URL: https://issues.apache.org/jira/browse/HADOOP-13198 Project: Hadoop Common Issue Type: Improvement Components: build Reporter: Mike Yoder Assignee: Mike Yoder Priority: Minor OWASP's Dependency-Check is a utility that identifies project dependencies and checks if there are any known, publicly disclosed, vulnerabilities. See https://www.owasp.org/index.php/OWASP_Dependency_Check This is very useful to stay on top of known vulnerabilities in third party jars. Since it's a maven plugin
[jira] [Commented] (HADOOP-13155) Implement TokenRenewer in KMS and HttpFS
[ https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298789#comment-15298789 ] Hadoop QA commented on HADOOP-13155: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 32s {color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 57s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 55s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 25s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 50s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 46s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 3s {color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 36s {color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s {color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall 
{color} | {color:green} 2m 20s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 54s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 54s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 25s {color} | {color:red} root: The patch generated 3 new + 315 unchanged - 6 fixed = 318 total (was 321) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 42s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 48s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s {color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 33s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 36s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 7s {color} | {color:red} hadoop-common in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 32s {color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s {color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 62m 25s {color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 124m 13s {color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.net.TestDNS | | | hadoop.hdfs.server.blockmanagement.TestPendingInvalidateBlock | | | hadoop.hdfs.web.TestWebHdfsTimeouts | | | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaRecovery | | | hadoop.hdfs.TestAsyncDFSRename | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:2c91fd8 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805935/HADOOP-13155.03.patch | | JIRA Issue | HADOOP-13155 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml | | uname | Linux 0ba26c401938 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56
[jira] [Updated] (HADOOP-13010) Refactor raw erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Andrew Wang updated HADOOP-13010: - Affects Version/s: 3.0.0-alpha1 Target Version/s: 3.0.0-alpha1 (was: ) > Refactor raw erasure coders > --- > > Key: HADOOP-13010 > URL: https://issues.apache.org/jira/browse/HADOOP-13010 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.0.0-alpha1 >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, > HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, > HADOOP-13010-v6.patch > > > This will refactor raw erasure coders according to some comments received so > far. > * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to > rely class inheritance to reuse the codes, instead they can be moved to some > utility. > * Suggested by [~jingzhao] somewhere quite some time ago, better to have a > state holder to keep some checking results for later reuse during an > encode/decode call. > This would not get rid of some inheritance levels as doing so isn't clear yet > for the moment and also incurs big impact. I do wish the end result by this > refactoring will make all the levels more clear and easier to follow.
[jira] [Updated] (HADOOP-13155) Implement TokenRenewer in KMS and HttpFS
[ https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-13155: --- Attachment: HADOOP-13155.03.patch Fixing the javac warning > Implement TokenRenewer in KMS and HttpFS > > > Key: HADOOP-13155 > URL: https://issues.apache.org/jira/browse/HADOOP-13155 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen > Attachments: HADOOP-13155.01.patch, HADOOP-13155.02.patch, > HADOOP-13155.03.patch, HADOOP-13155.pre.patch > > > Service DelegationToken (DT) renewal is done in Yarn by > {{org.apache.hadoop.yarn.server.resourcemanager.security.DelegationTokenRenewer}}, > where it calls {{Token#renew}} and uses ServiceLoader to find the renewer > class > ([code|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/token/Token.java#L382]), > and invokes the renew method from it. > We seem to miss the token renewer class in KMS / HttpFSFileSystem, and hence > Yarn defaults to {{TrivialRenewer}} for DT of such kinds, resulting in the > token not being renewed. > As a side note, {{HttpFSFileSystem}} does have a {{renewDelegationToken}} > API, but I don't see it invoked in hadoop code base. KMS does not have any > renew hook.
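The ServiceLoader-based renewer lookup described in this issue can be sketched in plain Java. The {{Renewer}} interface and the string fallback below are illustrative stand-ins, not Hadoop's actual {{org.apache.hadoop.security.token}} classes; they only mirror the behaviour where a token kind with no registered renewer falls back to {{TrivialRenewer}} and is never actually renewed:

```java
import java.util.ServiceLoader;

// Hypothetical stand-ins for Hadoop's Token/TokenRenewer types, to
// illustrate the ServiceLoader lookup mechanism discussed above.
public class RenewerLookupSketch {
    public interface Renewer {
        // Can this renewer handle tokens of the given kind?
        boolean handleKind(String kind);
    }

    // Scans registered providers; when none claims the token kind,
    // falls back to a trivial renewer that never renews anything.
    public static String selectRenewer(String kind) {
        for (Renewer r : ServiceLoader.load(Renewer.class)) {
            if (r.handleKind(kind)) {
                return r.getClass().getSimpleName();
            }
        }
        return "TrivialRenewer"; // token will never actually be renewed
    }

    public static void main(String[] args) {
        // No provider is registered via META-INF/services here, so the
        // lookup falls through to the trivial fallback -- exactly the
        // failure mode the issue describes for KMS/HttpFS tokens.
        System.out.println(selectRenewer("kms-dt"));
    }
}
```

Registering a concrete renewer for the KMS/HttpFS token kinds via the service-provider mechanism is what makes the loop above find a real renewer instead of the fallback.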
[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298600#comment-15298600 ] Colin Patrick McCabe commented on HADOOP-13010: --- {{TestCodecRawCoderMapping}} fails for me: {code} testRSDefaultRawCoder(org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping) Time elapsed: 0.015 sec <<< FAILURE! java.lang.AssertionError: null at org.junit.Assert.fail(Assert.java:86) at org.junit.Assert.assertTrue(Assert.java:41) at org.junit.Assert.assertTrue(Assert.java:52) at org.apache.hadoop.io.erasurecode.TestCodecRawCoderMapping.testRSDefaultRawCoder(TestCodecRawCoderMapping.java:54) {code} > Refactor raw erasure coders > --- > > Key: HADOOP-13010 > URL: https://issues.apache.org/jira/browse/HADOOP-13010 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, > HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, > HADOOP-13010-v6.patch > > > This will refactor raw erasure coders according to some comments received so > far. > * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to > rely class inheritance to reuse the codes, instead they can be moved to some > utility. > * Suggested by [~jingzhao] somewhere quite some time ago, better to have a > state holder to keep some checking results for later reuse during an > encode/decode call. > This would not get rid of some inheritance levels as doing so isn't clear yet > for the moment and also incurs big impact. I do wish the end result by this > refactoring will make all the levels more clear and easier to follow.
[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298587#comment-15298587 ] Colin Patrick McCabe commented on HADOOP-13010: --- It was nice talking to you, [~drankye]. It's too bad that we didn't have more time (it was a busy week because I was going out of town). bq. As I explained as above, \[the configuration-based\] approach might not work in all cases, because: there are more than one codecs to be configured and for each of these codecs there may be more than one coder implementation to be configured, and it's not easy to flatten the two layers into one dimension (here you used algorithm). I think these are really configuration questions, not questions about how the code should be structured. What does the user actually need to configure? If the user just configures a coder implementation, does that fully determine the codec which is being used? If so, we should have only one configuration knob-- coder. If a coder could be used for multiple codecs, then we need to have at least two knobs that the user can configure-- one for codec, and another for coder. Once we know what the configuration knobs are, we probably only need one or two functions to create the objects we need based on a {{Configuration}} object, not a whole mess of factory objects. Anyway, let's talk about refactoring codec configuration and factories in a follow-on JIRA. I think we've made a lot of good progress here and it will be helpful to get this patch committed. 
> Refactor raw erasure coders > --- > > Key: HADOOP-13010 > URL: https://issues.apache.org/jira/browse/HADOOP-13010 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, > HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, > HADOOP-13010-v6.patch > > > This will refactor raw erasure coders according to some comments received so > far. > * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to > rely class inheritance to reuse the codes, instead they can be moved to some > utility. > * Suggested by [~jingzhao] somewhere quite some time ago, better to have a > state holder to keep some checking results for later reuse during an > encode/decode call. > This would not get rid of some inheritance levels as doing so isn't clear yet > for the moment and also incurs big impact. I do wish the end result by this > refactoring will make all the levels more clear and easier to follow.
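The two-knob configuration Colin describes (one key selecting the codec, a second key selecting a coder implementation for that codec) can be sketched with a plain {{Map}} standing in for Hadoop's {{Configuration}}; the key names and defaults below are invented for illustration, not Hadoop's real configuration keys:

```java
import java.util.Map;

// Illustrative resolution of the codec/coder pair from two knobs,
// using a Map instead of org.apache.hadoop.conf.Configuration.
public class CoderConfigSketch {
    public static String resolveCoder(Map<String, String> conf) {
        // Knob 1: which codec (e.g. a Reed-Solomon variant) to use.
        String codec = conf.getOrDefault("erasurecode.codec", "rs-default");
        // Knob 2: which coder implementation for that codec
        // (e.g. pure-Java vs. native), keyed per codec.
        String coder = conf.getOrDefault("erasurecode.coder." + codec, "java");
        return codec + "/" + coder;
    }
}
```

With both knobs resolved up front, a single factory function can construct the coder directly, avoiding the "whole mess of factory objects" the comment argues against.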
[jira] [Commented] (HADOOP-12754) Client.handleSaslConnectionFailure() uses wrong user in exception text
[ https://issues.apache.org/jira/browse/HADOOP-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298380#comment-15298380 ] Steve Loughran commented on HADOOP-12754: - OK, I'm confused, as I thought I'd been in a situation where it wasn't making sense. Are we confident that it really is always the case? Or, per Larry's point, could both get printed? > Client.handleSaslConnectionFailure() uses wrong user in exception text > -- > > Key: HADOOP-12754 > URL: https://issues.apache.org/jira/browse/HADOOP-12754 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc, security >Affects Versions: 2.7.2 >Reporter: Steve Loughran >Priority: Minor > Attachments: HADOOP-12754-001.patch > > > {{Client.handleSaslConnectionFailure()}} includes the user in SASL failure > messages, but it calls {{UGI.getLoginUser()}} for its text. If there's an > auth problem in a {{doAs()}} context, this exception is fundamentally > misleading
[jira] [Commented] (HADOOP-12754) Client.handleSaslConnectionFailure() uses wrong user in exception text
[ https://issues.apache.org/jira/browse/HADOOP-12754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298353#comment-15298353 ] Daryn Sharp commented on HADOOP-12754: -- The current message is accidentally accurate as currently implemented, and the patch doesn't change anything. {{handleSaslConnectionFailure}} is only called when the real user is the login user. So the current user is the real user is the login user. > Client.handleSaslConnectionFailure() uses wrong user in exception text > -- > > Key: HADOOP-12754 > URL: https://issues.apache.org/jira/browse/HADOOP-12754 > Project: Hadoop Common > Issue Type: Sub-task > Components: ipc, security >Affects Versions: 2.7.2 >Reporter: Steve Loughran >Priority: Minor > Attachments: HADOOP-12754-001.patch > > > {{Client.handleSaslConnectionFailure()}} includes the user in SASL failure > messages, but it calls {{UGI.getLoginUser()}} for its text. If there's an > auth problem in a {{doAs()}} context, this exception is fundamentally > misleading
[jira] [Commented] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15298273#comment-15298273 ] Hadoop QA commented on HADOOP-13188: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 11s {color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s {color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s {color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 36s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s {color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 15s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 29s {color} | {color:green} branch-2 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s {color} | {color:green} branch-2 passed with JDK 
v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 13s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 9s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 9s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 11s {color} | {color:red} hadoop-tools/hadoop-aws: The patch generated 1 new + 14 unchanged - 0 fixed = 15 total (was 14) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 15s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 11s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s {color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s {color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s {color} | {color:green} the patch passed with JDK v1.8.0_91 {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s {color} | {color:green} the patch passed with JDK v1.7.0_101 {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_91. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s {color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 17s {color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 23m 40s {color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:babe025 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805883/HADOOP-13188-branch-2-001.patch | | JIRA Issue | HADOOP-13188 | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b2968ed95eaf 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/p
[jira] [Updated] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13188: Status: Patch Available (was: Open) > S3A file-create should throw error rather than overwrite directories > > > Key: HADOOP-13188 > URL: https://issues.apache.org/jira/browse/HADOOP-13188 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.2 >Reporter: Raymie Stata >Priority: Minor > Attachments: HADOOP-13188-branch-2-001.patch > > > S3A.create(Path,FsPermission,boolean,int,short,long,Progressable) is not > checking to see if it's being asked to overwrite a directory. It could > easily do so, and should throw an error in this case. > There is a test-case for this in AbstractFSContractTestBase, but it's being > skipped because S3A is a blobstore. However, both the Azure and Swift file > systems make this test, and the new S3 one should as well.
[jira] [Updated] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-13188: Attachment: HADOOP-13188-branch-2-001.patch Patch 001 # removed the overridden test case which was skipping the contract test # verified that the base test failed # patched S3AFileSystem to check for dest being a dir # verified that with that change, the test passes Full test run completed against S3 Ireland {code} Tests run: 225, Failures: 0, Errors: 0, Skipped: 5 [INFO] [INFO] BUILD SUCCESS [INFO] [INFO] Total time: 17:24 min [INFO] Finished at: 2016-05-24T15:02:47+01:00 [INFO] Final Memory: 17M/284M [INFO] {code} > S3A file-create should throw error rather than overwrite directories > > > Key: HADOOP-13188 > URL: https://issues.apache.org/jira/browse/HADOOP-13188 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.2 >Reporter: Raymie Stata >Priority: Minor > Attachments: HADOOP-13188-branch-2-001.patch > > > S3A.create(Path,FsPermission,boolean,int,short,long,Progressable) is not > checking to see if it's being asked to overwrite a directory. It could > easily do so, and should throw an error in this case. > There is a test-case for this in AbstractFSContractTestBase, but it's being > skipped because S3A is a blobstore. However, both the Azure and Swift file > systems make this test, and the new S3 one should as well.
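The "check for dest being a dir" step in the patch amounts to guard logic along these lines. This is a simplified sketch driven by boolean flags; the real {{S3AFileSystem.create()}} derives the existence and directory status from a {{getFileStatus}} probe against S3 object metadata:

```java
import java.io.IOException;
import java.nio.file.FileAlreadyExistsException;

// Illustrative create() precondition check, not S3A's actual code.
public class CreateGuardSketch {
    /**
     * Refuses to create over a directory regardless of the overwrite
     * flag, and refuses to overwrite an existing file unless overwrite
     * was explicitly requested.
     */
    public static void checkCreate(String dest, boolean exists,
                                   boolean isDirectory, boolean overwrite)
            throws IOException {
        if (exists && isDirectory) {
            // The contract test expects this even with overwrite=true.
            throw new FileAlreadyExistsException(dest + " is a directory");
        }
        if (exists && !overwrite) {
            throw new FileAlreadyExistsException(dest);
        }
    }

    public static void main(String[] args) throws IOException {
        // Nothing at the destination: create may proceed.
        checkCreate("/out/part-0000", false, false, false);
    }
}
```

The key point of the contract is the first branch: a directory at the destination is an error even when overwrite is requested, which is what the previously skipped test in AbstractFSContractTestBase verifies.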
[jira] [Assigned] (HADOOP-13188) S3A file-create should throw error rather than overwrite directories
[ https://issues.apache.org/jira/browse/HADOOP-13188?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-13188: --- Assignee: Steve Loughran > S3A file-create should throw error rather than overwrite directories > > > Key: HADOOP-13188 > URL: https://issues.apache.org/jira/browse/HADOOP-13188 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.7.2 >Reporter: Raymie Stata >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-13188-branch-2-001.patch > > > S3A.create(Path,FsPermission,boolean,int,short,long,Progressable) is not > checking to see if it's being asked to overwrite a directory. It could > easily do so, and should throw an error in this case. > There is a test-case for this in AbstractFSContractTestBase, but it's being > skipped because S3A is a blobstore. However, both the Azure and Swift file > systems make this test, and the new S3 one should as well.
[jira] [Commented] (HADOOP-13162) Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs
[ https://issues.apache.org/jira/browse/HADOOP-13162?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297999#comment-15297999 ] Steve Loughran commented on HADOOP-13162: - as usual: what did the hadoop-aws test run say, which endpoint? > Consider reducing number of getFileStatus calls in S3AFileSystem.mkdirs > --- > > Key: HADOOP-13162 > URL: https://issues.apache.org/jira/browse/HADOOP-13162 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Rajesh Balamohan >Priority: Minor > Attachments: HADOOP-13162-branch-2-002.patch, > HADOOP-13162-branch-2-003.patch, HADOOP-13162.001.patch > > > getFileStatus is relatively expensive call and mkdirs invokes it multiple > times depending on how deep the directory structure is. It would be good to > reduce the number of getFileStatus calls in such cases.
[jira] [Commented] (HADOOP-13191) FileSystem#listStatus should not return null
[ https://issues.apache.org/jira/browse/HADOOP-13191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297965#comment-15297965 ] Steve Loughran commented on HADOOP-13191: - I'm in favour of this, and returning an empty list. Can you also update filesystem.md to highlight that while versions of local FS have in the past returned null, it is considered erroneous > FileSystem#listStatus should not return null > > > Key: HADOOP-13191 > URL: https://issues.apache.org/jira/browse/HADOOP-13191 > Project: Hadoop Common > Issue Type: Bug > Components: fs >Affects Versions: 2.6.0 >Reporter: John Zhuge >Assignee: John Zhuge >Priority: Minor > > This came out of discussion in HADOOP-12718. The {{FileSystem#listStatus}} > contract does not indicate {{null}} is a valid return and some callers do not > test {{null}} before use: > AbstractContractGetFileStatusTest#testListStatusEmptyDirectory: > {code} > assertEquals("ls on an empty directory not of length 0", 0, > fs.listStatus(subfolder).length); > {code} > ChecksumFileSystem#copyToLocalFile: > {code} > FileStatus[] srcs = listStatus(src); > for (FileStatus srcFile : srcs) { > {code} > SimpleCopyLIsting#getFileStatus: > {code} > FileStatus[] fileStatuses = fileSystem.listStatus(path); > if (excludeList != null && excludeList.size() > 0) { > ArrayList fileStatusList = new ArrayList<>(); > for(FileStatus status : fileStatuses) { > {code} > IMHO, there is no good reason for {{listStatus}} to return {{null}}. It > should return empty list instead. > To enforce the contract that null is an invalid return, update javadoc and > consider Intellij IDEA's @Nullable and @NotNull annotations. > So far, I am only aware of the following functions that can return null: > * RawLocalFileSystem#listStatus
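The empty-list convention the issue argues for can be shown with a small defensive wrapper; {{String[]}} stands in here for Hadoop's {{FileStatus[]}} to keep the sketch self-contained:

```java
// Defensive pattern for the null-return problem discussed above:
// normalize a possibly-null listing to a shared empty array so every
// caller can iterate without a null check.
public class ListStatusSketch {
    private static final String[] EMPTY = new String[0];

    public static String[] orEmpty(String[] listing) {
        return listing == null ? EMPTY : listing;
    }

    public static void main(String[] args) {
        // Safe even when the underlying call returned null, which is
        // exactly the case that breaks ChecksumFileSystem#copyToLocalFile.
        for (String entry : orEmpty(null)) {
            System.out.println(entry);
        }
    }
}
```

Returning a shared immutable-by-convention empty array also avoids allocating a fresh array on every empty listing, which is why "empty list instead of null" costs nothing in practice.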
[jira] [Commented] (HADOOP-13155) Implement TokenRenewer in KMS and HttpFS
[ https://issues.apache.org/jira/browse/HADOOP-13155?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15297874#comment-15297874 ] Hadoop QA commented on HADOOP-13155:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 10s | Docker mode activated. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| 0 | mvndep | 0m 32s | Maven dependency ordering for branch |
| +1 | mvninstall | 6m 30s | trunk passed |
| +1 | compile | 6m 33s | trunk passed |
| +1 | checkstyle | 1m 24s | trunk passed |
| +1 | mvnsite | 2m 40s | trunk passed |
| +1 | mvneclipse | 0m 47s | trunk passed |
| +1 | findbugs | 4m 55s | trunk passed |
| +1 | javadoc | 2m 30s | trunk passed |
| 0 | mvndep | 0m 12s | Maven dependency ordering for patch |
| +1 | mvninstall | 2m 16s | the patch passed |
| +1 | compile | 6m 34s | the patch passed |
| -1 | javac | 6m 34s | root generated 1 new + 696 unchanged - 1 fixed = 697 total (was 697) |
| -1 | checkstyle | 1m 23s | root: The patch generated 3 new + 315 unchanged - 6 fixed = 318 total (was 321) |
| +1 | mvnsite | 2m 39s | the patch passed |
| +1 | mvneclipse | 0m 46s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | findbugs | 5m 27s | the patch passed |
| +1 | javadoc | 2m 35s | the patch passed |
| +1 | unit | 8m 44s | hadoop-common in the patch passed. |
| +1 | unit | 1m 33s | hadoop-kms in the patch passed. |
| +1 | unit | 0m 55s | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 60m 56s | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 121m 25s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSClientRetries |
| | hadoop.hdfs.shortcircuit.TestShortCircuitCache |

|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12805818/HADOOP-13155.02.patch |
| JIRA Issue | HADOOP-13155 |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle xml |
| uname | Linux c48de7f763ae 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchproce
[jira] [Updated] (HADOOP-13010) Refactor raw erasure coders
[ https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kai Zheng updated HADOOP-13010: --- Attachment: HADOOP-13010-v6.patch Updated the patch according to the above comments. > Refactor raw erasure coders > --- > > Key: HADOOP-13010 > URL: https://issues.apache.org/jira/browse/HADOOP-13010 > Project: Hadoop Common > Issue Type: Sub-task >Reporter: Kai Zheng >Assignee: Kai Zheng > Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch, > HADOOP-13010-v3.patch, HADOOP-13010-v4.patch, HADOOP-13010-v5.patch, > HADOOP-13010-v6.patch > > > This will refactor the raw erasure coders according to the comments received so > far. > * As discussed in HADOOP-11540 and suggested by [~cmccabe], it is better not to > rely on class inheritance to reuse code; instead, the shared code can be moved into a > utility class. > * As suggested by [~jingzhao] quite some time ago, it is better to have a > state holder that keeps checking results for later reuse during an > encode/decode call. > This does not remove every inheritance level, since how to do so is not yet clear > and would have a large impact. I do hope the end result of this > refactoring makes all the levels clearer and easier to follow. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
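The two ideas in the HADOOP-13010 description can be illustrated with a minimal sketch. This is a hypothetical illustration, not the actual Hadoop erasure-coding API: the names `CoderUtil`, `EncodingState`, and `RawErasureEncoder` are invented here, and a trivial XOR stands in for a real Reed-Solomon implementation. The point is that shared input checks live in a static utility rather than a base class, and a state holder performs them once per call so inner helpers can reuse the result.

```java
import java.nio.ByteBuffer;

// Utility class holding shared checks, replacing code reuse via inheritance.
final class CoderUtil {
  private CoderUtil() {}

  // Validate that all chunks have the same length; return that length.
  static int checkChunks(ByteBuffer[] chunks) {
    int len = chunks[0].remaining();
    for (ByteBuffer b : chunks) {
      if (b.remaining() != len) {
        throw new IllegalArgumentException("Chunk sizes differ");
      }
    }
    return len;
  }
}

// State holder: checking results computed once, reused during the call.
final class EncodingState {
  final int chunkLen;

  EncodingState(ByteBuffer[] inputs) {
    this.chunkLen = CoderUtil.checkChunks(inputs); // checked once here
  }
}

class RawErasureEncoder {
  // encode() builds the state once and hands it to the work method,
  // so the inner loop never re-validates the inputs.
  public void encode(ByteBuffer[] inputs, ByteBuffer[] outputs) {
    EncodingState state = new EncodingState(inputs);
    doEncode(state, inputs, outputs);
  }

  protected void doEncode(EncodingState state, ByteBuffer[] inputs,
                          ByteBuffer[] outputs) {
    // XOR of all inputs as a placeholder for a real RS encoder.
    ByteBuffer out = outputs[0];
    for (int i = 0; i < state.chunkLen; i++) {
      byte b = 0;
      for (ByteBuffer in : inputs) {
        b ^= in.get(in.position() + i);
      }
      out.put(b);
    }
    out.flip();
  }
}
```

The design choice mirrors the comment: composition (a utility plus a state object passed down the call chain) keeps each class's responsibility visible, whereas pushing the checks into an abstract base class hides where and how often they run.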