[jira] [Updated] (HADOOP-15262) AliyunOSS: rename() to move files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15262: - Attachment: HADOOP-15262.007.patch > AliyunOSS: rename() to move files in a directory in parallel > > > Key: HADOOP-15262 > URL: https://issues.apache.org/jira/browse/HADOOP-15262 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0 >Reporter: wujinhu >Assignee: wujinhu >Priority: Major > Attachments: HADOOP-15262.001.patch, HADOOP-15262.002.patch, > HADOOP-15262.003.patch, HADOOP-15262.004.patch, HADOOP-15262.005.patch, > HADOOP-15262.006.patch, HADOOP-15262.007.patch > > > Currently, the rename() operation renames files in series. This will be slow if a > directory contains many files, so we can improve this by renaming the files in > parallel. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
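The serial-to-parallel change described above can be sketched with a plain ExecutorService. This is only an illustration of the approach, not the actual HADOOP-15262 patch: {{copyFile}} is a hypothetical placeholder for the per-object OSS copy call, and the real patch must also delete the source keys and respect the store's rename semantics.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRenameSketch {
    // Hypothetical per-file step; in the real patch this would be an OSS
    // object copy (followed by a delete of the source key).
    static boolean copyFile(String src, String dst) {
        return src != null && dst != null;  // placeholder for the copy call
    }

    // Submit one copy task per file and wait for all of them, instead of
    // copying the files one by one in series.
    static boolean renameDirectory(List<String> files, String srcDir,
                                   String dstDir, int threads) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<Boolean>> results = new ArrayList<>();
            for (String f : files) {
                results.add(pool.submit(() ->
                    copyFile(srcDir + "/" + f, dstDir + "/" + f)));
            }
            boolean ok = true;
            for (Future<Boolean> r : results) {
                ok &= r.get();  // propagate any per-file failure
            }
            return ok;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<String> files = List.of("a", "b", "c");  // hypothetical file names
        System.out.println(renameDirectory(files, "src", "dst", 4));  // true
    }
}
```

The speedup comes from overlapping the per-object copy latency; the total work is unchanged, so the win is largest for directories with many small files.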
[jira] [Commented] (HADOOP-14667) Flexible Visual Studio support
[ https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401446#comment-16401446 ] genericqa commented on HADOOP-14667: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 10m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 51m 44s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 3s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 12s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 5m 33s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red}170m 43s{color} | {color:red} root in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}309m 15s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailure | | | hadoop.hdfs.TestMaintenanceState | | | hadoop.yarn.server.nodemanager.webapp.TestContainerLogsPage | | | hadoop.yarn.server.nodemanager.containermanager.scheduler.TestContainerSchedulerQueuing | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HADOOP-14667 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12885053/HADOOP-14667.05.patch | | Optional Tests | asflicense mvnsite unit compile javac javadoc mvninstall shadedclient xml | | uname | Linux f0f41ce2dc34 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4bf6220 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/14319/artifact/out/patch-unit-root.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14319/testReport/ | | Max. process+thread count | 2996 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common
[jira] [Commented] (HADOOP-15234) Throw meaningful message on null when initializing KMSWebApp
[ https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401434#comment-16401434 ] Hudson commented on HADOOP-15234: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13847 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13847/]) HADOOP-15234. Throw meaningful message on null when initializing (xiao: rev 21c66614610a3c3c9189832faeb120a2ba8069bb) * (edit) hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSWebApp.java > Throw meaningful message on null when initializing KMSWebApp > > > Key: HADOOP-15234 > URL: https://issues.apache.org/jira/browse/HADOOP-15234 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Xiao Chen >Assignee: fang zhenyi >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, > HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch > > > During KMS startup, if the {{keyProvider}} is null, it will NPE inside > KeyProviderExtension. > {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.KeyProviderExtension.(KeyProviderExtension.java:43) > at > org.apache.hadoop.crypto.key.CachingKeyProvider.(CachingKeyProvider.java:93) > at > org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170) > {noformat} > We're investigating the exact scenario that could lead to this, but the NPE > and log around it can be improved. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
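The improvement this JIRA makes can be illustrated with a fail-fast null check. The sketch below is a hedged stand-in, not the committed KMSWebApp code: {{createProvider}} and {{initProvider}} are hypothetical names for the initialization path, and the message text is illustrative.

```java
public class KmsInitSketch {
    // Illustrative stand-in for the provider type; the real code deals with
    // org.apache.hadoop.crypto.key.KeyProvider.
    static Object createProvider(boolean configured) {
        return configured ? new Object() : null;
    }

    // Instead of letting a null provider surface later as a bare NPE inside
    // KeyProviderExtension, fail fast with a message naming the likely cause.
    static Object initProvider(boolean configured) {
        Object provider = createProvider(configured);
        if (provider == null) {
            throw new IllegalStateException(
                "No KeyProvider has been initialized; "
                + "check the KMS key provider configuration");
        }
        return provider;
    }

    public static void main(String[] args) {
        try {
            initProvider(false);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The point is purely diagnostic: the startup still fails, but the operator sees which configuration to check rather than a stack trace deep inside CachingKeyProvider.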
[jira] [Updated] (HADOOP-15234) Throw meaningful message on null when initializing KMSWebApp
[ https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-15234: --- Resolution: Fixed Fix Version/s: 3.2.0 Status: Resolved (was: Patch Available) Pushed to trunk! Thank you for the work [~zhenyi], and [~shahrs87] for review. > Throw meaningful message on null when initializing KMSWebApp > > > Key: HADOOP-15234 > URL: https://issues.apache.org/jira/browse/HADOOP-15234 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Xiao Chen >Assignee: fang zhenyi >Priority: Major > Fix For: 3.2.0 > > Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, > HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch > > > During KMS startup, if the {{keyProvider}} is null, it will NPE inside > KeyProviderExtension. > {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.KeyProviderExtension.(KeyProviderExtension.java:43) > at > org.apache.hadoop.crypto.key.CachingKeyProvider.(CachingKeyProvider.java:93) > at > org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170) > {noformat} > We're investigating the exact scenario that could lead to this, but the NPE > and log around it can be improved. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15234) Throw meaningful message on null when initializing KMSWebApp
[ https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-15234: --- Summary: Throw meaningful message on null when initializing KMSWebApp (was: NPE when initializing KMSWebApp) > Throw meaningful message on null when initializing KMSWebApp > > > Key: HADOOP-15234 > URL: https://issues.apache.org/jira/browse/HADOOP-15234 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Xiao Chen >Assignee: fang zhenyi >Priority: Major > Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, > HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch > > > During KMS startup, if the {{keyProvider}} is null, it will NPE inside > KeyProviderExtension. > {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.KeyProviderExtension.(KeyProviderExtension.java:43) > at > org.apache.hadoop.crypto.key.CachingKeyProvider.(CachingKeyProvider.java:93) > at > org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170) > {noformat} > We're investigating the exact scenario that could lead to this, but the NPE > and log around it can be improved. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15262) AliyunOSS: rename() to move files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401400#comment-16401400 ] SammiChen commented on HADOOP-15262: Hi [~wujinhu], the 006 patch looks good overall. One minor issue: the indent in testRenameDirectoryCopyTaskPartialFailed is still "8"; it should be "4". Please also upload a patch for branch-2 in addition to the current trunk patch, and file a new JIRA to update the documentation for this improvement. > AliyunOSS: rename() to move files in a directory in parallel > > > Key: HADOOP-15262 > URL: https://issues.apache.org/jira/browse/HADOOP-15262 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0 >Reporter: wujinhu >Assignee: wujinhu >Priority: Major > Attachments: HADOOP-15262.001.patch, HADOOP-15262.002.patch, > HADOOP-15262.003.patch, HADOOP-15262.004.patch, HADOOP-15262.005.patch, > HADOOP-15262.006.patch > > > Currently, the rename() operation renames files in series. This will be slow if a > directory contains many files, so we can improve this by renaming the files in > parallel. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14687) AuthenticatedURL will reuse bad/expired session cookies
[ https://issues.apache.org/jira/browse/HADOOP-14687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401388#comment-16401388 ] Konstantin Shvachko commented on HADOOP-14687: -- Just a heads-up here. The revert of HADOOP-13119 removed one test {{testAuthenticationWithProxyUser()}} introduced here. You might want to reinstate it somewhere. > AuthenticatedURL will reuse bad/expired session cookies > --- > > Key: HADOOP-14687 > URL: https://issues.apache.org/jira/browse/HADOOP-14687 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.0 >Reporter: Daryn Sharp >Assignee: Daryn Sharp >Priority: Critical > Fix For: 2.9.0, 3.0.0-beta1, 2.8.2 > > Attachments: HADOOP-14687.2.trunk.patch, > HADOOP-14687.branch-2.8.patch, HADOOP-14687.trunk.patch > > > AuthenticatedURL with kerberos was designed to perform spnego, then use a > session cookie to avoid renegotiation overhead. Unfortunately the client > will continue to use a cookie after it expires. Every request elicits a 401, > the connection closes (despite keepalive, because 401 is an "error"), a TGS is > obtained, the connection is re-opened, the request is retried with the TGS, and the cycle repeats. This > places a strain on the KDC and creates lots of time_wait sockets. > > The main problem is that, unbeknownst to the auth URL, the JDK transparently does > spnego. The server issues a new cookie, but the auth URL doesn't scrape the > cookie from the response because it doesn't know the JDK re-authenticated. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
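The cookie-refresh behavior described in the issue can be modeled without a real KDC. The sketch below is purely illustrative: {{request}} is a fake server standing in for the whole HTTP/SPNEGO exchange, and the only point being shown is the last step, scraping Set-Cookie from every response so a cookie renewed by the JDK's transparent re-authentication replaces the stale cached one.

```java
import java.util.HashMap;
import java.util.Map;

public class CookieRefreshSketch {
    // Cached session cookie, as AuthenticatedURL keeps between requests.
    static String sessionCookie = "hadoop.auth=expired";

    // Fake server: accepts a fresh cookie as-is; for anything else it
    // simulates transparent JDK SPNEGO succeeding and issues a new cookie.
    static Map<String, String> request(String cookie) {
        Map<String, String> resp = new HashMap<>();
        resp.put("status", "200");
        if (!"hadoop.auth=fresh".equals(cookie)) {
            resp.put("Set-Cookie", "hadoop.auth=fresh");
        }
        return resp;
    }

    // The behavior the fix needs: always scrape Set-Cookie from the response,
    // even when the client believes its cached cookie was still valid.
    static void requestAndScrape() {
        Map<String, String> resp = request(sessionCookie);
        String renewed = resp.get("Set-Cookie");
        if (renewed != null) {
            sessionCookie = renewed;  // replace the stale cached cookie
        }
    }

    public static void main(String[] args) {
        requestAndScrape();
        System.out.println(sessionCookie);  // hadoop.auth=fresh
    }
}
```

Without the scrape, the client keeps replaying the expired cookie and pays the full 401/TGS/reconnect cycle on every request, which is exactly the KDC strain the report describes.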
[jira] [Commented] (HADOOP-15295) Remove redundant logging related to tags from Configuration
[ https://issues.apache.org/jira/browse/HADOOP-15295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401368#comment-16401368 ] genericqa commented on HADOOP-15295: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 30s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 80m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HADOOP-15295 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914804/HADOOP-15295.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8a32fe78e56f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 4bf6220 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14321/testReport/ | | Max. process+thread count | 1378 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14321/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Remove redundant logging related to tags from Configuration > --- > > Key:
[jira] [Commented] (HADOOP-15295) Remove redundant logging related to tags from Configuration
[ https://issues.apache.org/jira/browse/HADOOP-15295?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401305#comment-16401305 ] Ajay Kumar commented on HADOOP-15295: - Patch v1 wraps the tag-related functions in try/catch to avoid any unintended impact on the Configuration object. > Remove redundant logging related to tags from Configuration > --- > > Key: HADOOP-15295 > URL: https://issues.apache.org/jira/browse/HADOOP-15295 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HADOOP-15295.000.patch, HADOOP-15295.001.patch > > > Remove redundant logging related to tags from Configuration. > {code} > 2018-03-06 18:55:46,164 INFO conf.Configuration: Removed undeclared tags: > 2018-03-06 18:55:46,237 INFO conf.Configuration: Removed undeclared tags: > 2018-03-06 18:55:46,249 INFO conf.Configuration: Removed undeclared tags: > 2018-03-06 18:55:46,256 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
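The defensive wrapping described in the comment can be sketched as follows. This is an assumption-laden illustration, not Hadoop's actual Configuration code: {{recordPropertyTags}} and the two tag sets are hypothetical stand-ins for the real tag bookkeeping.

```java
import java.util.HashSet;
import java.util.Set;

public class TagParsingSketch {
    // Hypothetical set of declared tags (the real list comes from config).
    static final Set<String> VALID_TAGS = Set.of("PERFORMANCE", "SECURITY");
    static final Set<String> seenTags = new HashSet<>();

    // Tag bookkeeping is best-effort metadata: wrap it in try/catch so a
    // malformed tag value can never break loading of the Configuration itself,
    // and skip undeclared tags quietly instead of logging each one.
    static void recordPropertyTags(String tagStr) {
        try {
            for (String tag : tagStr.split(",")) {
                String t = tag.trim().toUpperCase();
                if (VALID_TAGS.contains(t)) {
                    seenTags.add(t);
                }
            }
        } catch (RuntimeException e) {
            // swallow: tag parsing must not affect the Configuration object
        }
    }

    public static void main(String[] args) {
        recordPropertyTags("performance, bogus");
        recordPropertyTags(null);  // would NPE; caught and ignored
        System.out.println(seenTags);  // [PERFORMANCE]
    }
}
```

Dropping the per-occurrence "Removed undeclared tags:" INFO line is what removes the log noise shown in the issue description.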
[jira] [Updated] (HADOOP-15295) Remove redundant logging related to tags from Configuration
[ https://issues.apache.org/jira/browse/HADOOP-15295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HADOOP-15295: Attachment: HADOOP-15295.001.patch > Remove redundant logging related to tags from Configuration > --- > > Key: HADOOP-15295 > URL: https://issues.apache.org/jira/browse/HADOOP-15295 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HADOOP-15295.000.patch, HADOOP-15295.001.patch > > > Remove redundant logging related to tags from Configuration. > {code} > 2018-03-06 18:55:46,164 INFO conf.Configuration: Removed undeclared tags: > 2018-03-06 18:55:46,237 INFO conf.Configuration: Removed undeclared tags: > 2018-03-06 18:55:46,249 INFO conf.Configuration: Removed undeclared tags: > 2018-03-06 18:55:46,256 WARN util.NativeCodeLoader: Unable to load > native-hadoop library for your platform... using builtin-java classes where > applicable > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401302#comment-16401302 ] Akira Ajisaka commented on HADOOP-14178: 011 patch: Add missing Whitebox.java > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, > HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, > HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, > HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, > HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, > HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, > HADOOP-14178.010.patch, HADOOP-14178.011.patch > > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14178: --- Attachment: HADOOP-14178.011.patch > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, > HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, > HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, > HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, > HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, > HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, > HADOOP-14178.010.patch, HADOOP-14178.011.patch > > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14667) Flexible Visual Studio support
[ https://issues.apache.org/jira/browse/HADOOP-14667?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401234#comment-16401234 ] Íñigo Goiri commented on HADOOP-14667: -- I went ahead and tried [^HADOOP-14667.05.patch] in my machine. Switching VS is always painful but VS2015 gets rid of all the cmd tools, so it was harder than usual. Anyway, I tested with VS2015 and tested with a console with the following: * Target: %comspec% /k ""C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\vcvarsall.bat"" amd64 * Start in: "C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\" (This is similar to the VS2010 cmd.) It took me a while to figure this out but from here, the whole thing built (winutils, hdfs, etc). I also tested VS2010 and also worked. +1 on [^HADOOP-14667.05.patch] > Flexible Visual Studio support > -- > > Key: HADOOP-14667 > URL: https://issues.apache.org/jira/browse/HADOOP-14667 > Project: Hadoop Common > Issue Type: Improvement > Components: build >Affects Versions: 3.0.0-beta1 > Environment: Windows >Reporter: Allen Wittenauer >Assignee: Allen Wittenauer >Priority: Major > Attachments: HADOOP-14667.00.patch, HADOOP-14667.01.patch, > HADOOP-14667.02.patch, HADOOP-14667.03.patch, HADOOP-14667.04.patch, > HADOOP-14667.05.patch > > > Is it time to upgrade the Windows native project files to use something more > modern than Visual Studio 2010? -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop
[ https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401149#comment-16401149 ] Ajay Kumar edited comment on HADOOP-15317 at 3/15/18 9:52 PM: -- [~xiaochen], thanks for working on this. Two comments: * Current change in do.while loop introduces a bug when {{if (excludedNodes == null || !excludedNodes.contains(ret))}} is not true even once. In this case we will return the last allocated value of {{ret = innerNode.getLeaf(leaveIndex, node);}}. To mitigate this we can reinitialize ret with null in else clause. {code:java} if (excludedNodes == null || !excludedNodes.contains(ret)) { break; } else { ret = null; LOG.debug("Node {} is excluded, continuing.", ret); }{code} * Shall we add {{numOfDatanodes}} in debug statement at L525. was (Author: ajayydv): [~xiaochen], thanks for working on this. Two comments: * Current change in do.while loop introduces a bug in case when {{if (excludedNodes == null || !excludedNodes.contains(ret))}} is not true even once. In this case we will return the last allocated value of {{ret = innerNode.getLeaf(leaveIndex, node);}}. To mitigate this we can reinitialize ret with null in else clause. {code}if (excludedNodes == null || !excludedNodes.contains(ret)) { break; } else { ret = null; LOG.debug("Node {} is excluded, continuing.", ret); }{code} * Shall we add {{numOfDatanodes}} in debug statement at L525. > Improve NetworkTopology chooseRandom's loop > --- > > Key: HADOOP-15317 > URL: https://issues.apache.org/jira/browse/HADOOP-15317 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-15317.01.patch > > > Recently we found a postmortem case where the ANN seems to be in an infinite > loop. From the logs it seems it just went through a rolling restart, and DNs > are getting registered. 
> Later the NN become unresponsive, and from the stacktrace it's inside a > do-while loop inside {{NetworkTopology#chooseRandom}} - part of what's done > in HDFS-10320. > Going through the code and logs I'm not able to come up with any theory > (thought about incorrect locking, or the Node object being modified outside > of NetworkTopology, both seem impossible) why this is happening, but we > should eliminate this loop. > stacktrace: > {noformat} > Stack: > java.util.HashMap.hash(HashMap.java:338) > java.util.HashMap.containsKey(HashMap.java:595) > java.util.HashSet.contains(HashSet.java:203) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115) > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596) > 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599) > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop
[ https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401149#comment-16401149 ] Ajay Kumar commented on HADOOP-15317: - [~xiaochen], thanks for working on this. Two comments: * Current change in do.while loop introduces a bug in case when {{if (excludedNodes == null || !excludedNodes.contains(ret))}} is not true even once. In this case we will return the last allocated value of {{ret = innerNode.getLeaf(leaveIndex, node);}}. To mitigate this we can reinitialize ret with null in else clause. {code}if (excludedNodes == null || !excludedNodes.contains(ret)) { break; } else { ret = null; LOG.debug("Node {} is excluded, continuing.", ret); }{code} * Shall we add {{numOfDatanodes}} in debug statement at L525. > Improve NetworkTopology chooseRandom's loop > --- > > Key: HADOOP-15317 > URL: https://issues.apache.org/jira/browse/HADOOP-15317 > Project: Hadoop Common > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-15317.01.patch > > > Recently we found a postmortem case where the ANN seems to be in an infinite > loop. From the logs it seems it just went through a rolling restart, and DNs > are getting registered. > Later the NN become unresponsive, and from the stacktrace it's inside a > do-while loop inside {{NetworkTopology#chooseRandom}} - part of what's done > in HDFS-10320. > Going through the code and logs I'm not able to come up with any theory > (thought about incorrect locking, or the Node object being modified outside > of NetworkTopology, both seem impossible) why this is happening, but we > should eliminate this loop. 
> stacktrace: > {noformat} > Stack: > java.util.HashMap.hash(HashMap.java:338) > java.util.HashMap.containsKey(HashMap.java:595) > java.util.HashSet.contains(HashSet.java:203) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:786) > org.apache.hadoop.net.NetworkTopology.chooseRandom(NetworkTopology.java:732) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseDataNode(BlockPlacementPolicyDefault.java:757) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:692) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseRandom(BlockPlacementPolicyDefault.java:666) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseLocalRack(BlockPlacementPolicyDefault.java:573) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTargetInOrder(BlockPlacementPolicyDefault.java:461) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:368) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:243) > org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicyDefault.chooseTarget(BlockPlacementPolicyDefault.java:115) > org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4AdditionalDatanode(BlockManager.java:1596) > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalDatanode(FSNamesystem.java:3599) > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getAdditionalDatanode(NameNodeRpcServer.java:717) > {noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
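The loop shape argued for in the review comment above can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names, not the actual NetworkTopology code: the candidate is reset to null whenever it is excluded, so a failed search cannot return a stale excluded node, and the retry count is bounded so the loop cannot spin forever.

```java
import java.util.List;
import java.util.Random;
import java.util.Set;

class ChooseRandomSketch {
    // Pick a random node not in 'excluded'; return null if none is found
    // within a bounded number of attempts.
    static String chooseRandom(List<String> nodes, Set<String> excluded, Random rng) {
        String ret = null;
        int attempts = nodes.size() * 2;  // bound the loop; the real code sizes this differently
        do {
            ret = nodes.get(rng.nextInt(nodes.size()));
            if (excluded == null || !excluded.contains(ret)) {
                break;  // acceptable node found
            }
            ret = null;  // reset so an excluded node can never leak out
        } while (--attempts > 0);
        return ret;
    }
}
```

When every node is excluded, this sketch returns null instead of the last excluded candidate, which is the behavior the suggested else clause restores.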
[jira] [Commented] (HADOOP-14699) Impersonation errors with UGI after second principal relogin
[ https://issues.apache.org/jira/browse/HADOOP-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401114#comment-16401114 ] Bharat Viswanadham commented on HADOOP-14699: - [~xiaochen] got it, I misunderstood. I will leave the backporting to [~daryn], as he is the author of the patch. > Impersonation errors with UGI after second principal relogin > > > Key: HADOOP-14699 > URL: https://issues.apache.org/jira/browse/HADOOP-14699 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.2, 2.7.3, 2.8.1 >Reporter: Jeff Storck >Priority: Major > > Multiple principals that are logged in using UGI instances that are > instantiated from a UGI class loaded by the same classloader will encounter > problems when the second principal attempts to relogin and perform an action > using a UGI.doAs(). An impersonation will occur and the operation attempted > by the second principal after relogging in will fail. There should not be an > implicit attempt to impersonate the second principal through the first > principal that logged in. 
> I have created a GitHub project that exhibits the impersonation error with > brief instructions on how to set up for the test and run it: > https://github.com/jtstorck/ugi-test > {noformat}18:44:55.687 [pool-2-thread-2] WARN > h.u.u.ugirunnable.ugite...@example.com - Unexpected exception while > performing task for [ugite...@example.com (auth:KERBEROS)] > org.apache.hadoop.ipc.RemoteException: User: ugite...@example.com is not > allowed to impersonate ugite...@example.com > at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1481) > at org.apache.hadoop.ipc.Client.call(Client.java:1427) > at org.apache.hadoop.ipc.Client.call(Client.java:1337) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) > at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:787) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:398) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) > at > org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) > at > org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:335) > at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source) > at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1700) > at > 
org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1436) > at > org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1433) > at > org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) > at > org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1448) > at > hadoop.ugitest.UgiTestMain$UgiRunnable.lambda$run$2(UgiTestMain.java:194) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1807) > at hadoop.ugitest.UgiTestMain$UgiRunnable.run(UgiTestMain.java:194) > at > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) > at java.util.concurrent.FutureTask.run(FutureTask.java:266) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180) > at > java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293) > at > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) > at > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) > at java.lang.Thread.run(Thread.java:745){noformat} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To
[jira] [Commented] (HADOOP-14699) Impersonation errors with UGI after second principal relogin
[ https://issues.apache.org/jira/browse/HADOOP-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401088#comment-16401088 ] Xiao Chen commented on HADOOP-14699: I'm not saying it is - feel free to work on it. But since the fix is HADOOP-9747 why don't we close this and track it there? > Impersonation errors with UGI after second principal relogin > > > Key: HADOOP-14699 > URL: https://issues.apache.org/jira/browse/HADOOP-14699 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.2, 2.7.3, 2.8.1 >Reporter: Jeff Storck >Priority: Major > > Multiple principals that are logged in using UGI instances that are > instantiated from a UGI class loaded by the same classloader will encounter > problems when the second principal attempts to relogin and perform an action > using a UGI.doAs(). An impersonation will occur and the operation attempted > by the second principal after relogging in will fail. There should not be an > implicit attempt to impersonate the second principal through the first > principal that logged in. 
[jira] [Commented] (HADOOP-15062) TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9
[ https://issues.apache.org/jira/browse/HADOOP-15062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401069#comment-16401069 ] Szilard Nemeth commented on HADOOP-15062: - +1 (non-binding) > TestCryptoStreamsWithOpensslAesCtrCryptoCodec fails on Debian 9 > --- > > Key: HADOOP-15062 > URL: https://issues.apache.org/jira/browse/HADOOP-15062 > Project: Hadoop Common > Issue Type: Bug >Reporter: Miklos Szegedi >Assignee: Miklos Szegedi >Priority: Major > Attachments: HADOOP-15062.000.patch > > > {code} > [ERROR] > org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec Time > elapsed: 0.478 s <<< FAILURE! > java.lang.AssertionError: Unable to instantiate codec > org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, is the required version of > OpenSSL installed? > at org.junit.Assert.fail(Assert.java:88) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertNotNull(Assert.java:621) > at > org.apache.hadoop.crypto.TestCryptoStreamsWithOpensslAesCtrCryptoCodec.init(TestCryptoStreamsWithOpensslAesCtrCryptoCodec.java:43) > {code} > This happened due to the following openssl change: > https://github.com/openssl/openssl/commit/ff4b7fafb315df5f8374e9b50c302460e068f188 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14699) Impersonation errors with UGI after second principal relogin
[ https://issues.apache.org/jira/browse/HADOOP-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401011#comment-16401011 ] Bharat Viswanadham commented on HADOOP-14699: - [~xiaochen] I think HADOOP-9747 is still not backported to branch-2. > Impersonation errors with UGI after second principal relogin > > > Key: HADOOP-14699 > URL: https://issues.apache.org/jira/browse/HADOOP-14699 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.2, 2.7.3, 2.8.1 >Reporter: Jeff Storck >Priority: Major > > Multiple principals that are logged in using UGI instances that are > instantiated from a UGI class loaded by the same classloader will encounter > problems when the second principal attempts to relogin and perform an action > using a UGI.doAs(). An impersonation will occur and the operation attempted > by the second principal after relogging in will fail. There should not be an > implicit attempt to impersonate the second principal through the first > principal that logged in. 
[jira] [Commented] (HADOOP-14699) Impersonation errors with UGI after second principal relogin
[ https://issues.apache.org/jira/browse/HADOOP-14699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401008#comment-16401008 ] Xiao Chen commented on HADOOP-14699: Thanks [~jtstorck] for creating the issue, and Daryn for the work on HADOOP-9747. HADOOP-9747 is in branch-3.0+ now, and the backport to branch-2 seems to be in progress. Can we close this one? > Impersonation errors with UGI after second principal relogin > > > Key: HADOOP-14699 > URL: https://issues.apache.org/jira/browse/HADOOP-14699 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 2.6.2, 2.7.3, 2.8.1 >Reporter: Jeff Storck >Priority: Major > > Multiple principals that are logged in using UGI instances that are > instantiated from a UGI class loaded by the same classloader will encounter > problems when the second principal attempts to relogin and perform an action > using a UGI.doAs(). An impersonation will occur and the operation attempted > by the second principal after relogging in will fail. There should not be an > implicit attempt to impersonate the second principal through the first > principal that logged in. 
[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories
[ https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400911#comment-16400911 ] Hudson commented on HADOOP-15209: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13845 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13845/]) HADOOP-15209. DistCp to eliminate needless deletion of files under (stevel: rev 1976e0066e9ae8852715fa69d8aea3769330e933) * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractDistCp.java * (add) hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/DeletedDirTracker.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java * (edit) hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java * (edit) hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/util/TestDistCpUtils.java * (edit) hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/OptionsParser.java * (add) hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/TestLocalContractDistCp.java * (edit) hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestCopyCommitter.java * (edit) hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java * (edit) hadoop-tools/hadoop-distcp/src/test/resources/log4j.properties * (add) hadoop-tools/hadoop-distcp/src/test/resources/contract/localfs.xml * (edit) hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptionSwitch.java * (edit) hadoop-tools/hadoop-azure-datalake/pom.xml * (add) hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/mapred/TestDeletedDirTracker.java * (add) hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/TestAdlContractDistCpLive.java * (edit) hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpOptions.java * (edit) 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListing.java * (edit) hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java * (edit) hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java * (edit) hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyListingFileStatus.java > DistCp to eliminate needless deletion of files under already-deleted > directories > > > Key: HADOOP-15209 > URL: https://issues.apache.org/jira/browse/HADOOP-15209 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Fix For: 3.1.0 > > Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, > HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, > HADOOP-15209-006.patch, HADOOP-15209-007.patch > > > DistCP issues a delete(file) request even if is underneath an already deleted > directory. This generates needless load on filesystems/object stores, and, if > the store throttles delete, can dramatically slow down the delete operation. > If the distcp delete operation can build a history of deleted directories, > then it will know when it does not need to issue those deletes. > Care is needed here to make sure that whatever structure is created does not > overload the heap of the process. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
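The core idea behind the new DeletedDirTracker can be sketched as below. This is an illustrative simplification with hypothetical names, not the committed class (which, per the heap concern in the description, also has to bound its memory): remember directories that have already been deleted, and skip the delete request for anything underneath one of them.

```java
import java.util.HashSet;
import java.util.Set;

class DeletedDirSketch {
    private final Set<String> deletedDirs = new HashSet<>();

    // True if 'path' still needs an explicit delete; false if an
    // already-deleted ancestor makes the request redundant.
    boolean shouldDelete(String path, boolean isDir) {
        if (isUnderDeletedDir(path)) {
            return false;
        }
        if (isDir) {
            deletedDirs.add(path);  // children seen later can now be skipped
        }
        return true;
    }

    private boolean isUnderDeletedDir(String path) {
        for (String p = parentOf(path); p != null; p = parentOf(p)) {
            if (deletedDirs.contains(p)) {
                return true;
            }
        }
        return false;
    }

    private static String parentOf(String path) {
        int i = path.lastIndexOf('/');
        return i <= 0 ? null : path.substring(0, i);
    }
}
```

Against an object store that throttles deletes, skipping the per-file requests under an already-deleted directory is where the speedup comes from.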
[jira] [Updated] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories
[ https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15209: Resolution: Fixed Fix Version/s: 3.1.0 Status: Resolved (was: Patch Available) Committed to branch-3.1 and up. Thanks for everyone's review and testing... very much one of those where you need to play with distcp on big diffs to appreciate how much it matters. > DistCp to eliminate needless deletion of files under already-deleted > directories > > > Key: HADOOP-15209 > URL: https://issues.apache.org/jira/browse/HADOOP-15209 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Fix For: 3.1.0 > > Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, > HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, > HADOOP-15209-006.patch, HADOOP-15209-007.patch > > > DistCP issues a delete(file) request even if it is underneath an already deleted > directory. This generates needless load on filesystems/object stores, and, if > the store throttles delete, can dramatically slow down the delete operation. > If the distcp delete operation can build a history of deleted directories, > then it will know when it does not need to issue those deletes. > Care is needed here to make sure that whatever structure is created does not > overload the heap of the process. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories
[ https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400841#comment-16400841 ] Aaron Fabbri commented on HADOOP-15209: --- My testing all looked good. +1 from me. > DistCp to eliminate needless deletion of files under already-deleted > directories > > > Key: HADOOP-15209 > URL: https://issues.apache.org/jira/browse/HADOOP-15209 > Project: Hadoop Common > Issue Type: Improvement > Components: tools/distcp >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, > HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, > HADOOP-15209-006.patch, HADOOP-15209-007.patch > > > DistCP issues a delete(file) request even if is underneath an already deleted > directory. This generates needless load on filesystems/object stores, and, if > the store throttles delete, can dramatically slow down the delete operation. > If the distcp delete operation can build a history of deleted directories, > then it will know when it does not need to issue those deletes. > Care is needed here to make sure that whatever structure is created does not > overload the heap of the process. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400706#comment-16400706 ] Xiao Chen commented on HADOOP-14445: Thanks for scheduling this, [~daryn]. The config is not a must-set for clients: per your last proposal, we now duplicate tokens into KMS_D_T and kms-dt by default. So everything works, except that the token is double-renewed by the RM. I added the config as an optimization, to provide a way to not duplicate the token (so KMS_D_T only) once everything has been upgraded post-HADOOP-14445. > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do > not share delegation tokens. (a client uses KMS address/port as the key for > delegation token) > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. 
To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
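The lookup problem described above can be illustrated with a toy credentials map (hypothetical names, not the Hadoop API): because the client keys the delegation token by the host:port it connected to, a token obtained from one KMS instance is simply not found when the client fails over to another instance, whereas a shared logical service name makes the same token visible to all of them.

```java
import java.util.HashMap;
import java.util.Map;

class TokenLookupSketch {
    static final Map<String, String> credentials = new HashMap<>();

    // Per-instance key, analogous to what buildTokenService derives
    // from the URL's host and port.
    static String serviceKey(String host, int port) {
        return host + ":" + port;
    }

    static void storeToken(String service, String token) {
        credentials.put(service, token);
    }

    // Returns null when the token was stored under a different
    // instance's key -- the HA failure mode discussed in this issue.
    static String lookupToken(String service) {
        return credentials.get(service);
    }
}
```

A token stored under "kms1:9600" is invisible to a lookup for "kms2:9600"; a single logical key shared by every instance avoids the miss.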
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400454#comment-16400454 ] Daryn Sharp commented on HADOOP-14445: -- I'll make an effort to review today but schedule is very hectic. My last proposal was centered around _not_ having another conf option so I'm not pleased that one was added. Initially I'm inclined to do something similar to Rushabh's last patch but I need to do a brain reload. > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider do > not share delegation tokens. (a client uses KMS address/port as the key for > delegation token) > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, A KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. 
> {quote}
> We should either update the KMS documentation, or fix this code to share delegation tokens.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
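The mismatch Wei-Chiu quotes can be boiled down to a few lines. Below is a hypothetical, JDK-only simplification (the map stands in for {{Credentials}}, and the string-building helper for {{SecurityUtil.buildTokenService}}), sketched only to show why a token issued by one KMS instance behind the load balancer is never found when the client contacts the next one:

```java
import java.util.HashMap;
import java.util.Map;

public class TokenServiceKeyDemo {
    // Simplified stand-in for SecurityUtil.buildTokenService: the token
    // "service" key is derived from the host:port of the URL being opened.
    static String buildTokenService(String host, int port) {
        return host + ":" + port;
    }

    public static void main(String[] args) {
        Map<String, String> creds = new HashMap<>();
        // A token obtained from the first KMS instance is stored under its own key...
        creds.put(buildTokenService("kms1.example.com", 9600), "delegation-token-A");
        // ...so a lookup keyed by the second instance misses, even though both
        // instances share the signing secret via ZooKeeper and could verify it.
        String token = creds.get(buildTokenService("kms2.example.com", 9600));
        System.out.println(token); // null: the token is not shared
    }
}
```

The host names and port are illustrative. The code path in HADOOP-14445 is the real-world version of this lookup: verification would succeed, but the client-side credential lookup never returns the token for a sibling instance.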
[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp
[ https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400429#comment-16400429 ]

Rushabh S Shah commented on HADOOP-15234:
-----------------------------------------

Thanks [~zhenyi] for the patch. +1 (non-binding) to v5 of the patch.

> NPE when initializing KMSWebApp
> -------------------------------
>
>                 Key: HADOOP-15234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15234
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: kms
>            Reporter: Xiao Chen
>            Assignee: fang zhenyi
>            Priority: Major
>         Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>         at org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>         at org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>         at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE and log around it can be improved.
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400369#comment-16400369 ] genericqa commented on HADOOP-14178: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 30s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 267 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 55s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 4m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 28m 46s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 1s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-hdfs-project/hadoop-hdfs-native-client hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests hadoop-mapreduce-project/hadoop-mapreduce-client hadoop-mapreduce-project hadoop-client-modules/hadoop-client-minicluster . {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 33m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 50s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 14s{color} | {color:red} hadoop-nfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s{color} | {color:red} hadoop-kms in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s{color} | {color:red} hadoop-hdfs-client in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 38s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} hadoop-hdfs-httpfs in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s{color} | {color:red} hadoop-hdfs-nfs in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 32s{color} | {color:red} hadoop-yarn-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 19s{color} | {color:red} hadoop-yarn-server-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 26s{color} | {color:red} hadoop-yarn-server-nodemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s{color} | {color:red} hadoop-yarn-server-web-proxy in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 20s{color} | {color:red} hadoop-yarn-server-applicationhistoryservice in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 17s{color} | {color:red} hadoop-yarn-server-timelineservice in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 27s{color} | {color:red} hadoop-yarn-server-resourcemanager in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 18s{color} | {color:red} hadoop-yarn-server-tests in the patch failed. {color} | | {color:red}-1{color} | {color:red}
[jira] [Commented] (HADOOP-15262) AliyunOSS: rename() to move files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400322#comment-16400322 ] genericqa commented on HADOOP-15262: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 34s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 31s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 17s{color} | {color:green} hadoop-aliyun in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 45m 50s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HADOOP-15262 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914688/HADOOP-15262.006.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 3dc640f9a69d 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5e013d5 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14318/testReport/ | | Max. process+thread count | 341 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-aliyun U: hadoop-tools/hadoop-aliyun | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14318/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > AliyunOSS: rename() to move files in a directory in parallel > > > Key: HADOOP-15262 >
[jira] [Updated] (HADOOP-15262) AliyunOSS: rename() to move files in a directory in parallel
[ https://issues.apache.org/jira/browse/HADOOP-15262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15262: - Attachment: HADOOP-15262.006.patch > AliyunOSS: rename() to move files in a directory in parallel > > > Key: HADOOP-15262 > URL: https://issues.apache.org/jira/browse/HADOOP-15262 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/oss >Affects Versions: 3.0.0 >Reporter: wujinhu >Assignee: wujinhu >Priority: Major > Attachments: HADOOP-15262.001.patch, HADOOP-15262.002.patch, > HADOOP-15262.003.patch, HADOOP-15262.004.patch, HADOOP-15262.005.patch, > HADOOP-15262.006.patch > > > Currently, rename() operation renames files in series. This will be slow if a > directory contains many files. So we can improve this by rename files in > parallel. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
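As a rough illustration of the change the issue proposes (not the actual hadoop-aliyun patch, which drives the OSS copy/delete API), the serial-to-parallel rename pattern looks like this with a plain {{ExecutorService}}; the local-filesystem {{Files.move}} below is a stand-in for the per-object OSS operation:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelRename {
    // Move every file under srcDir to dstDir using a fixed thread pool,
    // instead of moving them one by one in series.
    static void renameAll(Path srcDir, Path dstDir, int threads)
            throws IOException, InterruptedException, ExecutionException {
        Files.createDirectories(dstDir);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try (DirectoryStream<Path> entries = Files.newDirectoryStream(srcDir)) {
            List<Future<?>> pending = new ArrayList<>();
            for (Path src : entries) {
                Path dst = dstDir.resolve(src.getFileName());
                // submit each per-file move as an independent task
                pending.add(pool.submit(() -> Files.move(src, dst)));
            }
            for (Future<?> f : pending) {
                f.get(); // block for completion; surfaces the first failure
            }
        } finally {
            pool.shutdown();
        }
    }
}
```

The key design point mirrors the JIRA description: per-file renames are independent, so for a directory with many files the wall-clock time drops from the sum of per-file latencies to roughly the slowest batch, while the {{Future.get()}} loop preserves rename()'s all-or-error semantics.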
[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories
[ https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400261#comment-16400261 ]

Steve Loughran commented on HADOOP-15209:
-----------------------------------------

hmm. I can see the appeal, but hyphens and equals chars are fairly common in pathnames: YEAR=2013. Let's go with what we have.

Aaron, thx for the testing: if you are happy, give me the vote up & I'll do the final work.

> DistCp to eliminate needless deletion of files under already-deleted directories
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-15209
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15209
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, HADOOP-15209-006.patch, HADOOP-15209-007.patch
>
> DistCp issues a delete(file) request even if it is underneath an already deleted directory. This generates needless load on filesystems/object stores, and, if the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not overload the heap of the process.
[jira] [Commented] (HADOOP-14965) s3a input stream "normal" fadvise mode to be adaptive
[ https://issues.apache.org/jira/browse/HADOOP-14965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400254#comment-16400254 ]

ASF GitHub Bot commented on HADOOP-14965:
-----------------------------------------

Github user steveloughran closed the pull request at:

    https://github.com/apache/hadoop/pull/283

> s3a input stream "normal" fadvise mode to be adaptive
> -----------------------------------------------------
>
>                 Key: HADOOP-14965
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14965
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>             Fix For: 3.1.0, 3.0.1
>
>         Attachments: HADOOP-14965-001.patch, HADOOP-14965-002.patch, HADOOP-14965-003.patch, HADOOP-14965-004.patch
>
> HADOOP-14535 added seek optimisation to wasb, but rather than require the caller to declare sequential vs random, it works it out for itself:
> # defaults to sequential, lazy seek
> # if the caller ever seeks backwards, switches to random IO.
> This means that with the use pattern of columnar stores (go to end of file, read summary, then go to columns and work forwards), it will switch to random IO after that first seek back (cost: one aborted HTTP connection).
> Where this should benefit the most is in downstream apps where you are working with different data sources in the same object store/running off the same app config, but have different read patterns. I'm seeing exactly this in some of my Spark tests, where it's near impossible to set things up so that .gz files are read sequentially, but ORC data is read in random IO.
> I propose: the "normal" fadvise => adaptive; sequential => sequential always; random => random from the outset.
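The adaptive policy described in the issue reduces to a tiny state machine. The sketch below is an illustrative reduction, not the actual {{S3AInputStream}} code: start sequential, and flip permanently to random IO the first time the caller seeks backwards.

```java
public class AdaptiveReadPolicy {
    enum Mode { SEQUENTIAL, RANDOM }

    private Mode mode = Mode.SEQUENTIAL; // "normal" fadvise: start optimistic
    private long pos = 0;

    // Record a seek and return the mode to use for the next read.
    Mode onSeek(long target) {
        if (target < pos) {
            // First backward seek: one aborted HTTP connection, then
            // switch to ranged GETs for the rest of the stream's life.
            mode = Mode.RANDOM;
        }
        pos = target;
        return mode;
    }
}
```

This matches the columnar-store pattern in the description: the read of the footer/summary stays sequential, the first seek back to a column stripe pays the one-time switch cost, and everything afterwards runs as random IO.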
[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp
[ https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400169#comment-16400169 ] genericqa commented on HADOOP-15234: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 3s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 2s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 7s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 45s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HADOOP-15234 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914661/HADOOP-15234.005.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 16cd8abaff96 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 5e013d5 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14316/testReport/ | | Max. process+thread count | 302 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14316/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > NPE when initializing KMSWebApp >
[jira] [Commented] (HADOOP-15209) DistCp to eliminate needless deletion of files under already-deleted directories
[ https://issues.apache.org/jira/browse/HADOOP-15209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400153#comment-16400153 ]

Ewan Higgs commented on HADOOP-15209:
-------------------------------------

{code:java}
+ * We do not rely on parent entries being added immediately before children,
+ * as sorting may place "/dir12" between "/dir1" and its descendants.
+ *
{code}
AFAICT, SequenceFile.Sorter will put these in the correct order (for alphanumerics... if you have (, ), #, - etc. in your filename it probably gets wonky). This means you can do the following:
{code:java}
boolean shouldDelete(CopyListingFileStatus status) {
  final Path path = status.getPath();
  Preconditions.checkArgument(!path.isRoot(), "Root Dir");
  final String pathStr = path.toString();
  final String pathAsDir = pathStr + Path.SEPARATOR;
  if (lastDir == null) {
    if (status.isDirectory()) {
      lastDir = pathAsDir;
    }
    return true;
  }
  if (pathStr.startsWith(lastDir) || pathAsDir.equals(lastDir)) {
    return false;
  } else {
    if (status.isDirectory()) {
      lastDir = pathAsDir;
    }
    return true;
  }
}
{code}
This means you no longer need a cache. If you'd like, I can attach a patch with the update that passes all the unit tests.

> DistCp to eliminate needless deletion of files under already-deleted directories
> --------------------------------------------------------------------------------
>
>                 Key: HADOOP-15209
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15209
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
>         Attachments: HADOOP-15209-001.patch, HADOOP-15209-002.patch, HADOOP-15209-003.patch, HADOOP-15209-004.patch, HADOOP-15209-005.patch, HADOOP-15209-006.patch, HADOOP-15209-007.patch
>
> DistCp issues a delete(file) request even if it is underneath an already deleted directory. This generates needless load on filesystems/object stores, and, if the store throttles delete, can dramatically slow down the delete operation.
> If the distcp delete operation can build a history of deleted directories, then it will know when it does not need to issue those deletes.
> Care is needed here to make sure that whatever structure is created does not overload the heap of the process.
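Ewan's idea is easiest to see stripped of the DistCp types. The following hypothetical sketch applies the same last-deleted-directory check to plain sorted path strings; note the trailing separator on the remembered directory, which keeps "/dir12" from matching under "/dir1":

```java
public class DeleteFilter {
    // Last deleted directory, stored with a trailing '/', so prefix
    // matching never confuses "/dir1" with "/dir12".
    private String lastDir;

    // Paths must arrive in sorted order (as SequenceFile.Sorter provides).
    // Returns true if a delete should actually be issued for this entry.
    boolean shouldDelete(String path, boolean isDirectory) {
        String pathAsDir = path + "/";
        if (lastDir != null
                && (path.startsWith(lastDir) || pathAsDir.equals(lastDir))) {
            return false; // already covered by a deleted ancestor
        }
        if (isDirectory) {
            lastDir = pathAsDir; // remember the most recent deleted directory
        }
        return true;
    }
}
```

Because sorted order places every descendant immediately after its ancestor directory, remembering only the single most recent deleted directory is enough: there is no cache to bound, which addresses the heap concern in the issue description.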
[jira] [Updated] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-14178: --- Attachment: HADOOP-14178.010.patch > Move Mockito up to version 2.x > -- > > Key: HADOOP-14178 > URL: https://issues.apache.org/jira/browse/HADOOP-14178 > Project: Hadoop Common > Issue Type: Sub-task > Components: test >Affects Versions: 2.9.0 >Reporter: Steve Loughran >Assignee: Akira Ajisaka >Priority: Major > Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, > HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, > HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, > HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, > HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, > HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, > HADOOP-14178.010.patch > > > I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 > since the switch to maven in 2011. > Mockito is now at version 2.1, [with lots of Java 8 > support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. > That' s not just defining actions as closures, but in supporting Optional > types, mocking methods in interfaces, etc. > It's only used for testing, and, *provided there aren't regressions*, cost of > upgrade is low. The good news: test tools usually come with good test > coverage. The bad: mockito does go deep into java bytecodes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-14178) Move Mockito up to version 2.x
[ https://issues.apache.org/jira/browse/HADOOP-14178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400141#comment-16400141 ]

Akira Ajisaka commented on HADOOP-14178:
----------------------------------------

010 patch: fixed the failed tests.

> Move Mockito up to version 2.x
> ------------------------------
>
>                 Key: HADOOP-14178
>                 URL: https://issues.apache.org/jira/browse/HADOOP-14178
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: test
>    Affects Versions: 2.9.0
>            Reporter: Steve Loughran
>            Assignee: Akira Ajisaka
>            Priority: Major
>         Attachments: HADOOP-14178.001.patch, HADOOP-14178.002.patch, HADOOP-14178.003.patch, HADOOP-14178.004.patch, HADOOP-14178.005-wip.patch, HADOOP-14178.005-wip2.patch, HADOOP-14178.005-wip3.patch, HADOOP-14178.005-wip4.patch, HADOOP-14178.005-wip5.patch, HADOOP-14178.005-wip6.patch, HADOOP-14178.005.patch, HADOOP-14178.006.patch, HADOOP-14178.007.patch, HADOOP-14178.008.patch, HADOOP-14178.009.patch, HADOOP-14178.010.patch
>
> I don't know when Hadoop picked up Mockito, but it has been frozen at 1.8.5 since the switch to maven in 2011.
> Mockito is now at version 2.1, [with lots of Java 8 support|https://github.com/mockito/mockito/wiki/What%27s-new-in-Mockito-2]. That's not just defining actions as closures, but supporting Optional types, mocking methods in interfaces, etc.
> It's only used for testing, and, *provided there aren't regressions*, the cost of upgrade is low. The good news: test tools usually come with good test coverage. The bad: Mockito does go deep into Java bytecodes.
[jira] [Commented] (HADOOP-15234) NPE when initializing KMSWebApp
[ https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400083#comment-16400083 ]

fang zhenyi commented on HADOOP-15234:
--------------------------------------

Thanks [~shahrs87] for your great comment. I have replaced {{Preconditions.checkArgument(keyProvider != null, errmsg)}} with {{Preconditions.checkNotNull(keyProvider, errormsg)}} in {{HADOOP-15234.005.patch}}.

> NPE when initializing KMSWebApp
> -------------------------------
>
>                 Key: HADOOP-15234
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15234
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: kms
>            Reporter: Xiao Chen
>            Assignee: fang zhenyi
>            Priority: Major
>         Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch
>
> During KMS startup, if the {{keyProvider}} is null, it will NPE inside KeyProviderExtension.
> {noformat}
> java.lang.NullPointerException
>         at org.apache.hadoop.crypto.key.KeyProviderExtension.<init>(KeyProviderExtension.java:43)
>         at org.apache.hadoop.crypto.key.CachingKeyProvider.<init>(CachingKeyProvider.java:93)
>         at org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170)
> {noformat}
> We're investigating the exact scenario that could lead to this, but the NPE and log around it can be improved.
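For readers outside the patch, the distinction matters because Guava's {{checkArgument}} throws {{IllegalArgumentException}}, whereas a missing value conventionally surfaces as a {{NullPointerException}} with a descriptive message rather than the bare NPE in the stack trace above. A JDK-only analogue using {{java.util.Objects.requireNonNull}} (the message text here is hypothetical, not the one in the patch):

```java
import java.util.Objects;

public class KeyProviderCheckDemo {
    // Fail fast with a descriptive NullPointerException instead of letting
    // the null propagate into a constructor and NPE without context.
    static Object requireProvider(Object keyProvider) {
        return Objects.requireNonNull(keyProvider,
            "No KeyProvider has been initialized; check the KMS configuration");
    }

    public static void main(String[] args) {
        try {
            requireProvider(null);
        } catch (NullPointerException e) {
            // The exception now explains *what* was null, not just where.
            System.out.println(e.getMessage());
        }
    }
}
```

The exception type is unchanged (still an NPE, preserving the contract callers may rely on), but the failure now points at the misconfiguration instead of a line number inside KeyProviderExtension.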
[jira] [Updated] (HADOOP-15234) NPE when initializing KMSWebApp
[ https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] fang zhenyi updated HADOOP-15234: - Status: Patch Available (was: In Progress) > NPE when initializing KMSWebApp > --- > > Key: HADOOP-15234 > URL: https://issues.apache.org/jira/browse/HADOOP-15234 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Xiao Chen >Assignee: fang zhenyi >Priority: Major > Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, > HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch > > > During KMS startup, if the {{keyProvider}} is null, it will NPE inside > KeyProviderExtension. > {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.KeyProviderExtension.(KeyProviderExtension.java:43) > at > org.apache.hadoop.crypto.key.CachingKeyProvider.(CachingKeyProvider.java:93) > at > org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170) > {noformat} > We're investigating the exact scenario that could lead to this, but the NPE > and log around it can be improved. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15234) NPE when initializing KMSWebApp
[ https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] fang zhenyi updated HADOOP-15234: - Attachment: HADOOP-15234.005.patch > NPE when initializing KMSWebApp > --- > > Key: HADOOP-15234 > URL: https://issues.apache.org/jira/browse/HADOOP-15234 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Xiao Chen >Assignee: fang zhenyi >Priority: Major > Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, > HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch > > > During KMS startup, if the {{keyProvider}} is null, it will NPE inside > KeyProviderExtension. > {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.KeyProviderExtension.(KeyProviderExtension.java:43) > at > org.apache.hadoop.crypto.key.CachingKeyProvider.(CachingKeyProvider.java:93) > at > org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170) > {noformat} > We're investigating the exact scenario that could lead to this, but the NPE > and log around it can be improved. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15234) NPE when initializing KMSWebApp
[ https://issues.apache.org/jira/browse/HADOOP-15234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] fang zhenyi updated HADOOP-15234: - Status: In Progress (was: Patch Available) > NPE when initializing KMSWebApp > --- > > Key: HADOOP-15234 > URL: https://issues.apache.org/jira/browse/HADOOP-15234 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Reporter: Xiao Chen >Assignee: fang zhenyi >Priority: Major > Attachments: HADOOP-15234.001.patch, HADOOP-15234.002.patch, > HADOOP-15234.003.patch, HADOOP-15234.004.patch, HADOOP-15234.005.patch > > > During KMS startup, if the {{keyProvider}} is null, it will NPE inside > KeyProviderExtension. > {noformat} > java.lang.NullPointerException > at > org.apache.hadoop.crypto.key.KeyProviderExtension.(KeyProviderExtension.java:43) > at > org.apache.hadoop.crypto.key.CachingKeyProvider.(CachingKeyProvider.java:93) > at > org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:170) > {noformat} > We're investigating the exact scenario that could lead to this, but the NPE > and log around it can be improved. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings
[ https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16400036#comment-16400036 ] Hudson commented on HADOOP-15305: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13843 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13843/]) HADOOP-15305. Replace FileUtils.writeStringToFile(File, String) with (aajisaka: rev 5e013d50d1a98d37accf8c6b07b14254ad4f3639) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/diskbalancer/datamodel/DiskBalancerCluster.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestShellDecryptionKeyProvider.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/test/java/org/apache/hadoop/yarn/util/TestProcfsBasedProcessTree.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/util/DockerClientConfigHandler.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/resources/TestCGroupsResourceCalculator.java * (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServerWebServer.java * (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-applications/hadoop-yarn-services/hadoop-yarn-services-core/src/main/java/org/apache/hadoop/yarn/service/utils/PublishedConfigurationOutputter.java > Replace FileUtils.writeStringToFile(File, String) with (File, String, > Charset) to fix deprecation warnings > -- > > Key: HADOOP-15305 > URL: https://issues.apache.org/jira/browse/HADOOP-15305 > Project: Hadoop Common > Issue Type: Bug >Reporter: Akira Ajisaka >Assignee: fang zhenyi >Priority: Minor > Labels: newbie > Fix For: 3.2.0 > > Attachments: HADOOP-15305.001.patch, HADOOP-15305.002.patch > > > FileUtils.writeStringToFile(File, String) relies on default charset and > should be replaced 
with FileUtils.writeStringToFile(File, String, Charset).
[jira] [Updated] (HADOOP-15305) Replace FileUtils.writeStringToFile(File, String) with (File, String, Charset) to fix deprecation warnings
[ https://issues.apache.org/jira/browse/HADOOP-15305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-15305: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 Status: Resolved (was: Patch Available) +1, committed this to trunk. Thanks [~zhenyi]! > Replace FileUtils.writeStringToFile(File, String) with (File, String, > Charset) to fix deprecation warnings > -- > > Key: HADOOP-15305 > URL: https://issues.apache.org/jira/browse/HADOOP-15305 > Project: Hadoop Common > Issue Type: Bug >Reporter: Akira Ajisaka >Assignee: fang zhenyi >Priority: Minor > Labels: newbie > Fix For: 3.2.0 > > Attachments: HADOOP-15305.001.patch, HADOOP-15305.002.patch > > > FileUtils.writeStringToFile(File, String) relies on default charset and > should be replaced with FileUtils.writeStringToFile(File, String, Charset). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
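The deprecation being fixed here is about implicit platform defaults: the two-argument Commons IO overload encodes with whatever the JVM's default charset happens to be, while the three-argument {{FileUtils.writeStringToFile(File, String, Charset)}} overload makes the encoding explicit. The sketch below illustrates the same principle with only the JDK's {{java.nio.file}} API (so it runs without the Commons IO jar); the patch itself uses the Commons IO three-argument overload.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public final class CharsetWriteDemo {

    // Writes text with an explicit charset, mirroring the switch from
    // writeStringToFile(File, String) to writeStringToFile(File, String,
    // Charset): the bytes on disk no longer depend on the JVM default.
    static Path writeUtf8(Path path, String text) throws IOException {
        return Files.write(path, text.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("demo", ".txt");
        writeUtf8(tmp, "héllo");
        // 'é' is two bytes in UTF-8, so the file is 6 bytes regardless of
        // what file.encoding the JVM was started with.
        System.out.println(Files.readAllBytes(tmp).length);
    }
}
```

This is why the warning matters in a codebase like Hadoop that runs on machines with differing locales: output written under a default charset can differ host to host.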
[jira] [Updated] (HADOOP-15318) TestQueue fails on Java9
[ https://issues.apache.org/jira/browse/HADOOP-15318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Takanobu Asanuma updated HADOOP-15318: -- Issue Type: Test (was: Sub-task) Parent: (was: HADOOP-11123) > TestQueue fails on Java9 > > > Key: HADOOP-15318 > URL: https://issues.apache.org/jira/browse/HADOOP-15318 > Project: Hadoop Common > Issue Type: Test > Components: test > Environment: Applied HADOOP-12760 and HDFS-11610 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > > {noformat} > [INFO] Running org.apache.hadoop.mapred.TestQueue > [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.353 > s <<< FAILURE! - in org.apache.hadoop.mapred.TestQueue > [ERROR] testQueue(org.apache.hadoop.mapred.TestQueue) Time elapsed: 1.186 s > <<< FAILURE! > org.junit.ComparisonFailure: > expected:<...roperties":[{"key":"[capacity","value":"20"},{"key":"user-limit","value":"3]0"}],"children":[]}]...> > but > was:<...roperties":[{"key":"[user-limit","value":"30"},{"key":"capacity","value":"2]0"}],"children":[]}]...> > at org.junit.Assert.assertEquals(Assert.java:115) > at org.junit.Assert.assertEquals(Assert.java:144) > at org.apache.hadoop.mapred.TestQueue.testQueue(TestQueue.java:156) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat}
[jira] [Commented] (HADOOP-15318) TestQueue fails on Java9
[ https://issues.apache.org/jira/browse/HADOOP-15318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399989#comment-16399989 ] Takanobu Asanuma commented on HADOOP-15318: --- Sure, I will do it soon. > TestQueue fails on Java9 > > > Key: HADOOP-15318 > URL: https://issues.apache.org/jira/browse/HADOOP-15318 > Project: Hadoop Common > Issue Type: Sub-task > Components: test > Environment: Applied HADOOP-12760 and HDFS-11610 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > > {noformat} > [INFO] Running org.apache.hadoop.mapred.TestQueue > [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.353 > s <<< FAILURE! - in org.apache.hadoop.mapred.TestQueue > [ERROR] testQueue(org.apache.hadoop.mapred.TestQueue) Time elapsed: 1.186 s > <<< FAILURE! > org.junit.ComparisonFailure: > expected:<...roperties":[{"key":"[capacity","value":"20"},{"key":"user-limit","value":"3]0"}],"children":[]}]...> > but > was:<...roperties":[{"key":"[user-limit","value":"30"},{"key":"capacity","value":"2]0"}],"children":[]}]...> > at org.junit.Assert.assertEquals(Assert.java:115) > at org.junit.Assert.assertEquals(Assert.java:144) > at org.apache.hadoop.mapred.TestQueue.testQueue(TestQueue.java:156) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat}
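Reading the ComparisonFailure above closely, the expected and actual strings contain the same two properties ({{capacity=20}}, {{user-limit=30}}); only their serialization order differs between Java 8 and Java 9. One way such a test can be made order-insensitive is to compare the properties as maps rather than as raw strings, as sketched below. This is an illustrative approach, not the TestQueue patch; the class and helper names are hypothetical.

```java
import java.util.Map;
import java.util.TreeMap;

public final class OrderInsensitiveCheck {

    // Builds a map from alternating key/value arguments. Two property sets
    // built in different orders compare equal, which is exactly the property
    // a Java 9-safe assertion needs.
    static Map<String, String> toMap(String... pairs) {
        Map<String, String> m = new TreeMap<>();
        for (int i = 0; i < pairs.length; i += 2) {
            m.put(pairs[i], pairs[i + 1]);
        }
        return m;
    }

    public static void main(String[] args) {
        Map<String, String> expected = toMap("capacity", "20", "user-limit", "30");
        Map<String, String> actual   = toMap("user-limit", "30", "capacity", "20");
        // Same data, different insertion order: the maps still compare equal.
        System.out.println(expected.equals(actual));
    }
}
```

Asserting on map equality (or on a canonically sorted rendering) keeps the test stable across JDKs whose internal iteration order changed.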
[jira] [Commented] (HADOOP-15317) Improve NetworkTopology chooseRandom's loop
[ https://issues.apache.org/jira/browse/HADOOP-15317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399967#comment-16399967 ] genericqa commented on HADOOP-15317: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 32s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 0s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 18s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 13s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 89m 14s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HADOOP-15317 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12914633/HADOOP-15317.01.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 2b2944206738 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3a0f4bc | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/14315/testReport/ | | Max. process+thread count | 1362 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/14315/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Improve NetworkTopology
[jira] [Commented] (HADOOP-15318) TestQueue fails on Java9
[ https://issues.apache.org/jira/browse/HADOOP-15318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16399963#comment-16399963 ] Akira Ajisaka commented on HADOOP-15318: Would you move this issue from HADOOP common to MAPREDUCE? Thanks. > TestQueue fails on Java9 > > > Key: HADOOP-15318 > URL: https://issues.apache.org/jira/browse/HADOOP-15318 > Project: Hadoop Common > Issue Type: Sub-task > Components: test > Environment: Applied HADOOP-12760 and HDFS-11610 >Reporter: Takanobu Asanuma >Assignee: Takanobu Asanuma >Priority: Major > > {noformat} > [INFO] Running org.apache.hadoop.mapred.TestQueue > [ERROR] Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 1.353 > s <<< FAILURE! - in org.apache.hadoop.mapred.TestQueue > [ERROR] testQueue(org.apache.hadoop.mapred.TestQueue) Time elapsed: 1.186 s > <<< FAILURE! > org.junit.ComparisonFailure: > expected:<...roperties":[{"key":"[capacity","value":"20"},{"key":"user-limit","value":"3]0"}],"children":[]}]...> > but > was:<...roperties":[{"key":"[user-limit","value":"30"},{"key":"capacity","value":"2]0"}],"children":[]}]...> > at org.junit.Assert.assertEquals(Assert.java:115) > at org.junit.Assert.assertEquals(Assert.java:144) > at org.apache.hadoop.mapred.TestQueue.testQueue(TestQueue.java:156) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.base/java.lang.reflect.Method.invoke(Method.java:564) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat}