[jira] [Commented] (HADOOP-15721) Disable checkstyle javadoc warnings for test classes
[ https://issues.apache.org/jira/browse/HADOOP-15721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644467#comment-16644467 ] Dinesh Chitlangia commented on HADOOP-15721:

[~shaneku...@gmail.com] I have given this a try. Kindly review patch 001.

> Disable checkstyle javadoc warnings for test classes
> ----------------------------------------------------
>
> Key: HADOOP-15721
> URL: https://issues.apache.org/jira/browse/HADOOP-15721
> Project: Hadoop Common
> Issue Type: Bug
> Reporter: Shane Kumpf
> Assignee: Dinesh Chitlangia
> Priority: Minor
> Labels: newbie
> Attachments: HADOOP-15721.001.patch
>
> The current checkstyle rules will throw a warning, of minimal value, if the
> javadoc is missing for a test class. We should consider disabling this
> check and allowing contributors to comment test classes as they see fit.
> Here is an example:
> {code:java}
> ./hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/test/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/linux/runtime/MockLinuxContainerRuntime.java:27: public class MockLinuxContainerRuntime implements LinuxContainerRuntime {: Missing a Javadoc comment. [JavadocType]
> {code}

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
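One common way to implement what the ticket asks for is a checkstyle suppressions filter that exempts test sources from the Javadoc checks. This is a sketch only; the actual patch 001 may configure this differently (for example, directly in checkstyle.xml in hadoop-build-tools):

```xml
<?xml version="1.0"?>
<!DOCTYPE suppressions PUBLIC
    "-//Checkstyle//DTD SuppressionFilter Configuration 1.2//EN"
    "https://checkstyle.org/dtds/suppressions_1_2.dtd">
<!-- Hypothetical suppressions entry: turn off the JavadocType check (the one
     firing in the example above) for anything under a src/test directory. -->
<suppressions>
  <suppress checks="JavadocType" files="[\\/]src[\\/]test[\\/]"/>
</suppressions>
```

With a SuppressionFilter pointing at this file, test classes such as MockLinuxContainerRuntime would no longer produce the "Missing a Javadoc comment" warning, while main sources keep the check.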
[jira] [Commented] (HADOOP-15721) Disable checkstyle javadoc warnings for test classes
[ https://issues.apache.org/jira/browse/HADOOP-15721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644466#comment-16644466 ] Hadoop QA commented on HADOOP-15721:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 17m 50s | trunk passed |
| +1 | compile | 0m 13s | trunk passed |
| +1 | mvnsite | 0m 13s | trunk passed |
| +1 | shadedclient | 27m 56s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 13s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 10s | the patch passed |
| +1 | compile | 0m 9s | the patch passed |
| +1 | javac | 0m 9s | the patch passed |
| +1 | mvnsite | 0m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 1s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 10m 52s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 10s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 0m 12s | hadoop-build-tools in the patch passed. |
| +1 | asflicense | 0m 19s | The patch does not generate ASF License warnings. |
| | | 41m 20s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15721 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943171/HADOOP-15721.001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 39451f6c7b9a 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / edce866 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15338/testReport/ |
| Max. process+thread count | 402 (vs. ulimit of 1) |
| modules | C: hadoop-build-tools U: hadoop-build-tools |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15338/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HADOOP-15721) Disable checkstyle javadoc warnings for test classes
[ https://issues.apache.org/jira/browse/HADOOP-15721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia updated HADOOP-15721:
---
Attachment: HADOOP-15721.001.patch
Status: Patch Available (was: Open)
[jira] [Assigned] (HADOOP-15721) Disable checkstyle javadoc warnings for test classes
[ https://issues.apache.org/jira/browse/HADOOP-15721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Dinesh Chitlangia reassigned HADOOP-15721:
---
Assignee: Dinesh Chitlangia
[jira] [Updated] (HADOOP-15323) AliyunOSS: Improve copy file performance for AliyunOSSFileSystemStore
[ https://issues.apache.org/jira/browse/HADOOP-15323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15323:
---
Description: Aliyun OSS will support shallow copy, meaning the server copies only object metadata when a copy-object operation occurs. So we will improve copy file performance for AliyunOSSFileSystemStore.
(was: Aliyun OSS will support shallow copy which means server will only copy metadata when copy object operation occurs. So, we will improve multiCopy for AliyunOSSFileSystemStore at that time.)
[jira] [Updated] (HADOOP-15323) AliyunOSS: Improve copyFile for AliyunOSSFileSystemStore
[ https://issues.apache.org/jira/browse/HADOOP-15323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15323:
---
Summary: AliyunOSS: Improve copyFile for AliyunOSSFileSystemStore (was: AliyunOSS: Improve multipartCopy for AliyunOSSFileSystemStore)
[jira] [Updated] (HADOOP-15323) AliyunOSS: Improve copy file performance for AliyunOSSFileSystemStore
[ https://issues.apache.org/jira/browse/HADOOP-15323?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] wujinhu updated HADOOP-15323:
---
Summary: AliyunOSS: Improve copy file performance for AliyunOSSFileSystemStore (was: AliyunOSS: Improve copyFile for AliyunOSSFileSystemStore)
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644342#comment-16644342 ] Hudson commented on HADOOP-15832:

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15163 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15163/])
HADOOP-15832. Upgrade BouncyCastle to 1.60. Contributed by Robert (aajisaka: rev 6fa3feb577d05d73a2eb1bc8e39800326f678c31)
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/pom.xml
* (edit) hadoop-project/pom.xml
* (edit) hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-app/pom.xml
* (edit) hadoop-client-modules/hadoop-client-check-invariants/pom.xml
* (edit) hadoop-hdds/server-scm/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/pom.xml
* (edit) hadoop-client-modules/hadoop-client-minicluster/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-web-proxy/pom.xml
* (edit) hadoop-common-project/hadoop-common/pom.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs-nfs/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/pom.xml
* (edit) hadoop-client-modules/hadoop-client-check-test-invariants/pom.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs/pom.xml
* (edit) hadoop-ozone/ozone-manager/pom.xml
* (edit) hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests/pom.xml
* (edit) hadoop-client-modules/hadoop-client-runtime/pom.xml
* (edit) hadoop-common-project/hadoop-kms/pom.xml
* (edit) hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml

> Upgrade BouncyCastle to 1.60
> ----------------------------
>
> Key: HADOOP-15832
> URL: https://issues.apache.org/jira/browse/HADOOP-15832
> Project: Hadoop Common
> Issue Type: Improvement
> Affects Versions: 3.3.0
> Reporter: Robert Kanter
> Assignee: Robert Kanter
> Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15832.001.patch
>
> As part of my work on YARN-6586, I noticed that we're using a very old
> version of BouncyCastle:
> {code:xml}
> <dependency>
>   <groupId>org.bouncycastle</groupId>
>   <artifactId>bcprov-jdk16</artifactId>
>   <version>1.46</version>
>   <scope>test</scope>
> </dependency>
> {code}
> The *-jdk16 artifacts have been discontinued and are not recommended (see
> [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]).
> In particular, the newest release, 1.46, is from 2011!
> [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16]
> The currently maintained and recommended artifacts are *-jdk15on:
> [https://www.bouncycastle.org/latest_releases.html]
> They're currently on version 1.60, released only a few months ago.
> We should update BouncyCastle to the *-jdk15on artifacts and the 1.60
> release. It's currently a test-only artifact, so there should be no
> backwards-compatibility issues with updating this. It's also needed for
> YARN-6586, where we'll actually be shipping it.
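Following the ticket's recommendation, the dependency moves to the maintained *-jdk15on artifact at version 1.60. A sketch of the corresponding pom.xml entry (the committed patch touches many modules and may scope each differently):

```xml
<!-- Sketch of the upgraded dependency per HADOOP-15832's recommendation:
     the discontinued bcprov-jdk16 1.46 artifact is replaced by the
     maintained bcprov-jdk15on line at 1.60. -->
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>1.60</version>
  <scope>test</scope>
</dependency>
```

Because the artifactId changes (not just the version), every module that referenced bcprov-jdk16 needs its dependency declaration updated, which explains the long list of pom.xml edits above.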
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle to 1.60
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644329#comment-16644329 ] Hadoop QA commented on HADOOP-15832:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 13s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| 0 | mvndep | 2m 2s | Maven dependency ordering for branch |
| +1 | mvninstall | 18m 43s | trunk passed |
| +1 | compile | 15m 14s | trunk passed |
| -1 | mvnsite | 0m 29s | server-scm in trunk failed. |
| -1 | mvnsite | 0m 29s | ozone-manager in trunk failed. |
| +1 | shadedclient | 58m 41s | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 0m 22s | server-scm in trunk failed. |
| -1 | javadoc | 0m 22s | ozone-manager in trunk failed. |
|| || || || Patch Compile Tests ||
| 0 | mvndep | 0m 23s | Maven dependency ordering for patch |
| -1 | mvninstall | 0m 11s | server-scm in the patch failed. |
| -1 | mvninstall | 0m 11s | ozone-manager in the patch failed. |
| +1 | compile | 14m 33s | the patch passed |
| +1 | javac | 14m 33s | the patch passed |
| -1 | mvnsite | 0m 24s | server-scm in the patch failed. |
| -1 | mvnsite | 0m 23s | ozone-manager in the patch failed. |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | xml | 0m 22s | The patch has no ill-formed XML file. |
| +1 | shadedclient | 11m 53s | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 0m 22s | server-scm in the patch failed. |
| -1 | javadoc | 0m 45s | ozone-manager in the patch failed. |
|| || || || Other Tests ||
| +1 | unit | 0m 18s | hadoop-project in the patch passed. |
| -1 | unit | 16m 58s | hadoop-common in the patch failed. |
| +1 | unit | 6m 29s | hadoop-kms in the patch passed. |
| -1 | unit | 103m 19s | hadoop-hdfs in the patch failed. |
| +1 | unit | 4m 30s | hadoop-hdfs-httpfs in the patch passed. |
| +1 | unit | 2m 22s | hadoop-hdfs-nfs in the patch passed. |
| +1 | unit | 3m 27s |
[jira] [Updated] (HADOOP-15832) Upgrade BouncyCastle to 1.60
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-15832:
---
Resolution: Fixed
Fix Version/s: 3.3.0
Status: Resolved (was: Patch Available)

Committed this to trunk. Thanks again [~rkanter] and [~haibochen]!
[jira] [Updated] (HADOOP-15832) Upgrade BouncyCastle to 1.60
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HADOOP-15832:
---
Hadoop Flags: Reviewed
Summary: Upgrade BouncyCastle to 1.60 (was: Upgrade BouncyCastle)
[jira] [Commented] (HADOOP-15717) TGT renewal thread does not log IOException
[ https://issues.apache.org/jira/browse/HADOOP-15717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644299#comment-16644299 ] Hadoop QA commented on HADOOP-15717:

-1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 12s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 45s | trunk passed |
| +1 | compile | 15m 54s | trunk passed |
| +1 | checkstyle | 0m 57s | trunk passed |
| +1 | mvnsite | 1m 17s | trunk passed |
| +1 | shadedclient | 13m 56s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 41s | trunk passed |
| +1 | javadoc | 0m 56s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 46s | the patch passed |
| +1 | compile | 15m 54s | the patch passed |
| +1 | javac | 15m 54s | the patch passed |
| +1 | checkstyle | 0m 53s | the patch passed |
| +1 | mvnsite | 1m 11s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 14s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 49s | the patch passed |
| +1 | javadoc | 0m 57s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 28s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 47s | The patch does not generate ASF License warnings. |
| | | 97m 11s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15717 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943134/HADOOP-15717.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 88a93fe66cd2 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6a39739 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15336/testReport/ |
| Max. process+thread count | 1462 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15336/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.
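The ticket title says the TGT renewal thread does not log IOException, i.e. a renewal failure is swallowed silently. Whatever the actual HADOOP-15717 patch does inside UserGroupInformation, the general shape of such a fix is to log the exception before retrying or giving up. A generic, self-contained sketch (all names here are illustrative, not Hadoop's real code):

```java
import java.io.IOException;
import java.util.logging.Logger;

public class RenewalLoopExample {
    private static final Logger LOG =
            Logger.getLogger(RenewalLoopExample.class.getName());

    /** Stand-in for the action the renewal thread performs each cycle. */
    interface Renewer {
        void renew() throws IOException;
    }

    /**
     * Attempts one renewal. Returns true on success; on failure it logs the
     * IOException (the point of HADOOP-15717: surface it, don't drop it)
     * and returns false so the caller can decide to retry or exit.
     */
    static boolean tryRenew(Renewer renewer) {
        try {
            renewer.renew();
            return true;
        } catch (IOException ioe) {
            LOG.warning("TGT renewal failed: " + ioe);
            return false;
        }
    }

    public static void main(String[] args) {
        // Simulate a failing renewal: the failure is logged, not swallowed.
        boolean ok = tryRenew(() -> {
            throw new IOException("KDC unreachable");
        });
        System.out.println(ok); // prints: false
    }
}
```

Without the log line, an operator would only notice renewal failures indirectly, once the expired TGT starts breaking RPCs.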
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644291#comment-16644291 ] Akira Ajisaka commented on HADOOP-15832:

+1. Thanks [~haibochen] and [~rkanter].
[jira] [Commented] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus
[ https://issues.apache.org/jira/browse/HADOOP-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644284#comment-16644284 ] Hadoop QA commented on HADOOP-15831:

+1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 22s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 17m 13s | trunk passed |
| +1 | compile | 0m 25s | trunk passed |
| +1 | checkstyle | 0m 14s | trunk passed |
| +1 | mvnsite | 0m 25s | trunk passed |
| +1 | shadedclient | 10m 29s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 28s | trunk passed |
| +1 | javadoc | 0m 17s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 25s | the patch passed |
| +1 | compile | 0m 20s | the patch passed |
| +1 | javac | 0m 20s | the patch passed |
| +1 | checkstyle | 0m 12s | the patch passed |
| +1 | mvnsite | 0m 22s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 10m 52s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 54s | the patch passed |
| +1 | javadoc | 0m 15s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 13m 51s | hadoop-distcp in the patch passed. |
| +1 | asflicense | 0m 21s | The patch does not generate ASF License warnings. |
| | | 57m 51s | |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 |
| JIRA Issue | HADOOP-15831 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943137/HADOOP-15831.03.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 71127c78046d 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 6a39739 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15337/testReport/ |
| Max. process+thread count | 415 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15337/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter
[ https://issues.apache.org/jira/browse/HADOOP-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644274#comment-16644274 ] Hadoop QA commented on HADOOP-15835: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 15m 57s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Findbugs executables are not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. 
{color} | || || || || {color:brown} branch-2.9 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 16s{color} | {color:green} branch-2.9 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 41s{color} | {color:green} branch-2.9 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} branch-2.9 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 37s{color} | {color:green} branch-2.9 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} branch-2.9 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 30s{color} | {color:green} hadoop-kms in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 27s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a226c68 | | JIRA Issue | HADOOP-15835 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943133/HADOOP-15835.001-branch-2.9.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 53684c4913b5 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-2.9 / 590a4e9 | | maven | version: Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) | | Default Java | 1.7.0_181 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15335/testReport/ | | Max. process+thread count | 113 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15335/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Reuse Object Mapper in KMSJSONWriter > > > Key: HADOOP-15835 > URL: https://issues.apache.org/jira/browse/HADOOP-15835 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles >Priority: Major > Attachments: HADOOP-15835.001-branch-2.9.patch, >
[jira] [Updated] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter
[ https://issues.apache.org/jira/browse/HADOOP-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Eagles updated HADOOP-15835: - Description: In lieu of HADOOP-15550 in branch-3.0, branch-2.9, branch-2.8. This patch will provide some benefit of MapperObject reuse though not as complete as the JsonSerialization util lazy loading fix. (was: In lie of HADOOP-15550 in branch-3.0, branch-2.9, branch-2.8. This patch will provide some benefit of MapperObject reuse though not as complete as the JsonSerialization util lazy loading fix.)
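The reuse the issue above describes is the standard pattern of constructing one shared mapper instance instead of one per request. A minimal sketch of that pattern, with a stand-in class instead of Jackson's ObjectMapper so the example is self-contained (the real KMSJSONWriter uses Jackson, whose mapper is thread-safe for serialization and expensive to construct):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class MapperReuseSketch {
    // Stand-in for ObjectMapper: construction is the costly part we count.
    static class CostlyMapper {
        static final AtomicInteger CONSTRUCTIONS = new AtomicInteger();
        CostlyMapper() { CONSTRUCTIONS.incrementAndGet(); }
        String write(Object o) { return String.valueOf(o); }
    }

    // Before: a new mapper is built on every call.
    static String writePerCall(Object o) {
        return new CostlyMapper().write(o);
    }

    // After: one shared instance, created once and reused across requests.
    private static final CostlyMapper SHARED = new CostlyMapper();
    static String writeShared(Object o) {
        return SHARED.write(o);
    }

    public static void main(String[] args) {
        int before = CostlyMapper.CONSTRUCTIONS.get();
        for (int i = 0; i < 100; i++) {
            writeShared(i);
        }
        // Reuse means zero additional constructions after initialization.
        System.out.println(CostlyMapper.CONSTRUCTIONS.get() - before);
    }
}
```

Run as-is, the program prints 0: one hundred writes add no constructions beyond the single shared instance.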
[jira] [Commented] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter
[ https://issues.apache.org/jira/browse/HADOOP-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644258#comment-16644258 ] Xiao Chen commented on HADOOP-15835: +1, thanks for the backport!
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644246#comment-16644246 ] Haibo Chen commented on HADOOP-15832: - I'm +1 pushing this in as such.
[jira] [Commented] (HADOOP-15837) DynamoDB table Update can fail S3A FS init
[ https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644231#comment-16644231 ] Hadoop QA commented on HADOOP-15837: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 27s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 21s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 4 new + 5 unchanged - 0 fixed = 9 total (was 5) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 38s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 27s{color} | {color:green} hadoop-aws in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 24s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 12s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15837 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943125/HADOOP-15837-001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4cd821dbcf77 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6a39739 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/15334/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | whitespace | https://builds.apache.org/job/PreCommit-HADOOP-Build/15334/artifact/out/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15334/testReport/ | | Max. process+thread count | 339 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-aws U:
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644222#comment-16644222 ] Robert Kanter commented on HADOOP-15832: I just built Hadoop with the patch (with and without shading) in a blank maven repo with no problems, so I'm pretty sure it's something with whatever Jenkins/Yetus is doing.
[jira] [Updated] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus
[ https://issues.apache.org/jira/browse/HADOOP-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-15831: Attachment: HADOOP-15831.03.patch > Include modificationTime in the toString method of CopyListingFileStatus > > > Key: HADOOP-15831 > URL: https://issues.apache.org/jira/browse/HADOOP-15831 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Attachments: HADOOP-15831.01.patch, HADOOP-15831.02.patch, > HADOOP-15831.03.patch > > > I was looking at a DistCp error observed in hbase backup test: > {code} > 2018-10-08 18:12:03,067 WARN [Thread-933] mapred.LocalJobRunner$Job(590): > job_local1175594345_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@7ac56817{hdfs://localhost:41712/user/hbase/test-data/ > > c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@7aa4deb2{hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10- > > 57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > 2018-10-08 18:12:03,150 INFO [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(226): Progress: 100.0% subTask: > 1.0 mapProgress: 1.0 > {code} > I noticed that modificationTime was not included in the toString of > CopyListingFileStatus. 
> I propose including modificationTime so that it is easier to tell when the > respective files last changed.
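The proposal above can be sketched as a small toString change. The class below is a stand-in for org.apache.hadoop.tools.CopyListingFileStatus (field names and layout assumed, modeled on the error message quoted in the issue), showing modificationTime surfaced next to the existing fields:

```java
public class FileStatusToStringSketch {
    // Hypothetical holder mirroring the fields the real toString reports.
    static class Status {
        final String path;
        final long length;
        final long modificationTime;  // the newly surfaced field

        Status(String path, long length, long modificationTime) {
            this.path = path;
            this.length = length;
            this.modificationTime = modificationTime;
        }

        @Override
        public String toString() {
            // Including modificationTime makes mismatched-chunk errors
            // show when each side of the comparison last changed.
            return "{" + path + " length = " + length
                + " modificationTime = " + modificationTime + "}";
        }
    }

    public static void main(String[] args) {
        Status s = new Status(
            "hdfs://localhost:41712/user/hbase/f1", 5100L, 1539022262249L);
        System.out.println(s);
    }
}
```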
[jira] [Updated] (HADOOP-15717) TGT renewal thread does not log IOException
[ https://issues.apache.org/jira/browse/HADOOP-15717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Szilard Nemeth updated HADOOP-15717: Attachment: HADOOP-15717.002.patch > TGT renewal thread does not log IOException > --- > > Key: HADOOP-15717 > URL: https://issues.apache.org/jira/browse/HADOOP-15717 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Szilard Nemeth >Assignee: Szilard Nemeth >Priority: Major > Attachments: HADOOP-15717.001.patch, HADOOP-15717.002.patch > > > I came across a case where tgt.getEndTime() returned null and resulted > in an NPE; this was observed during a test suite execution on a cluster. The reason for logging the {{IOException}} is that it helps to > troubleshoot what caused the exception, as it can come from two different > calls within the try-catch. > I can see that [~gabor.bota] handled this with HADOOP-15593, but apart from > logging the fact that the ticket's {{endDate}} was null, we have not logged > the exception at all. > With the current code, the exception is swallowed and the thread terminates > in case the ticket's {{endDate}} is null. > As this can happen with OpenJDK for example, it is required to print the > exception (stack trace, message) to the log. > The code should be updated here: > https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java#L918
[jira] [Commented] (HADOOP-15717) TGT renewal thread does not log IOException
[ https://issues.apache.org/jira/browse/HADOOP-15717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644197#comment-16644197 ] Szilard Nemeth commented on HADOOP-15717: - Hi [~xiaochen], [~rkanter]! Oh I see what I had overlooked. Removed the newly added error log and modified the 2 existing error logs to contain the exception. Unfortunately, I had to use String.format, as there's no API in this version of log4j that supports object parameters and exception logging at the same time. Actually, on line 945, the code's intention was to log the exception, but as the signature of the log4j API call is different, it was never logged. The call had fewer format specifiers in the string, too (4 instead of 5).
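The String.format workaround discussed above can be sketched as follows. The logger stub below stands in for a log4j-1.x-style error(Object, Throwable) call, and onRenewalFailure is a hypothetical method, not the actual UserGroupInformation code; the point is only that pre-formatting the message lets the exception travel as the separate Throwable argument instead of being dropped:

```java
import java.io.IOException;

public class TgtRenewalLoggingSketch {
    // Stand-in for LOG.error(Object message, Throwable t): records what
    // would be written so we can check nothing is swallowed.
    static String lastMessage;
    static Throwable lastThrown;

    static void error(Object message, Throwable t) {
        lastMessage = String.valueOf(message);
        lastThrown = t;
    }

    static void onRenewalFailure(String user, long now, IOException ioe) {
        // Build the message up front with String.format, then pass the
        // exception separately so its stack trace reaches the log.
        error(String.format(
            "Exception encountered while running the renewal command for %s "
            + "(TGT end time: null, now: %d)", user, now), ioe);
    }

    public static void main(String[] args) {
        onRenewalFailure("hdfs/host@REALM", 1000L,
            new IOException("kinit failed"));
        System.out.println(lastMessage);
        System.out.println(lastThrown.getMessage());
    }
}
```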
[jira] [Updated] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter
[ https://issues.apache.org/jira/browse/HADOOP-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Eagles updated HADOOP-15835: - Attachment: HADOOP-15835.001-branch-2.9.patch
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644168#comment-16644168 ] Robert Kanter commented on HADOOP-15832: I don't think that's related - it's failing to compile ozone because it can't find an ozone jar: {noformat} Failure to find org.apache.hadoop:hadoop-ozone-common:jar:0.3.0-SNAPSHOT in https://repository.apache.org/content/repositories/snapshots was cached in the local repository, resolution will not be reattempted until the update interval of apache.snapshots.https has elapsed or updates are forced{noformat} All this patch does to ozone is update the bouncycastle dependency, and it works fine locally, so I'm guessing the Jenkins build is caching something wrong maybe? Any ideas [~ajisakaa] or [~haibochen]? I'm inclined to say this isn't related, but I also don't want to break the build if it's a real problem.
[jira] [Commented] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter
[ https://issues.apache.org/jira/browse/HADOOP-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644163#comment-16644163 ] Hadoop QA commented on HADOOP-15835: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 16m 29s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} branch-3.0 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 52s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 54s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 40s{color} | {color:green} branch-3.0 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} branch-3.0 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 26s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 31s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 4s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 24s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}101m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:1776208 | | JIRA Issue | HADOOP-15835 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943096/HADOOP-15835.001-branch-3.0.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6cd97d344288 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | branch-3.0 / 8a9f61b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15333/testReport/ | | Max. process+thread count | 317 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15333/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically
[jira] [Updated] (HADOOP-15837) DynamoDB table Update can fail S3A FS init
[ https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15837: Status: Patch Available (was: Open) > DynamoDB table Update can fail S3A FS init > -- > > Key: HADOOP-15837 > URL: https://issues.apache.org/jira/browse/HADOOP-15837 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 > Environment: s3guard test with small capacity (10) but autoscale > enabled & multiple consecutive parallel test runs executed...this seems to > have been enough load to trigger the state change >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15837-001.patch > > > When DDB autoscales a table, it goes into an UPDATING state. The > waitForTableActive operation in the AWS SDK doesn't seem to wait long enough > for this to recover. We need to catch & retry -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15837) DynamoDB table Update can fail S3A FS init
[ https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644159#comment-16644159 ] Steve Loughran commented on HADOOP-15837: - Patch 001. # change the case statement so that updating == active as far as init is concerned, consistent with the AWS docs # error handling/retries around waitForTable access, with tests. I'd actually implemented change #2 and was testing it before I found the AWS docs which said yes, updating is good to go; the most minimal patch is just that switch statement change. I'm putting this patch here for review and will then roll it back. Table creation is rare enough in production that retries there aren't a real need; the use case "on-demand-create" is more a test/explorative option. Tested? Yes, but only as part of HADOOP-14556, and I have not (yet) successfully recreated the updating failure (or it did happen but this time it didn't fail) > DynamoDB table Update can fail S3A FS init > -- > > Key: HADOOP-15837 > URL: https://issues.apache.org/jira/browse/HADOOP-15837 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 > Environment: s3guard test with small capacity (10) but autoscale > enabled & multiple consecutive parallel test runs executed...this seems to > have been enough load to trigger the state change >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15837-001.patch > > > When DDB autoscales a table, it goes into an UPDATING state. The > waitForTableActive operation in the AWS SDK doesn't seem to wait long enough > for this to recover. We need to catch & retry
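The case-statement change described in #1 above can be sketched as follows. This is an illustrative stand-in, not the actual patch: {{TableStatus}} and {{isTableUsableForInit}} are hypothetical names, not the S3Guard or AWS SDK identifiers.

```java
// Sketch of "updating == active as far as init is concerned".
// TableStatus and isTableUsableForInit are illustrative names only.
public class TableStateCheck {
    enum TableStatus { CREATING, UPDATING, DELETING, ACTIVE }

    // Treat UPDATING like ACTIVE during FS init: per the AWS docs, a
    // table being updated (e.g. by autoscaling) still serves reads and
    // writes, so init should not fail on it.
    static boolean isTableUsableForInit(TableStatus status) {
        switch (status) {
            case ACTIVE:
            case UPDATING:
                return true;
            default:
                return false;
        }
    }
}
```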
[jira] [Updated] (HADOOP-15837) DynamoDB table Update can fail S3A FS init
[ https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15837: Attachment: HADOOP-15837-001.patch > DynamoDB table Update can fail S3A FS init > -- > > Key: HADOOP-15837 > URL: https://issues.apache.org/jira/browse/HADOOP-15837 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 > Environment: s3guard test with small capacity (10) but autoscale > enabled & multiple consecutive parallel test runs executed...this seems to > have been enough load to trigger the state change >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > Attachments: HADOOP-15837-001.patch > > > When DDB autoscales a table, it goes into an UPDATING state. The > waitForTableActive operation in the AWS SDK doesn't seem to wait long enough > for this to recover. We need to catch & retry
[jira] [Commented] (HADOOP-15826) @Retries annotation of putObject() call & uses wrong
[ https://issues.apache.org/jira/browse/HADOOP-15826?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644134#comment-16644134 ] Steve Loughran commented on HADOOP-15826: - +[~ehiggs] thoughts? > @Retries annotation of putObject() call & uses wrong > > > Key: HADOOP-15826 > URL: https://issues.apache.org/jira/browse/HADOOP-15826 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: HADOOP-15826-001.patch > > > The retry annotations of the S3AFilesystem putObject call and its > writeOperationsHelper use aren't in sync with what the code does. > Fix
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644135#comment-16644135 ] Haibo Chen commented on HADOOP-15832: - I manually kicked off a build at [https://builds.apache.org/job/PreCommit-HADOOP-Build/15329/.|https://builds.apache.org/job/PreCommit-HADOOP-Build/15329/] But it looks like it is hitting the same issue again. I tried mvn install -U locally, but could not reproduce it either. > Upgrade BouncyCastle > > > Key: HADOOP-15832 > URL: https://issues.apache.org/jira/browse/HADOOP-15832 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Major > Attachments: HADOOP-15832.001.patch > > > As part of my work on YARN-6586, I noticed that we're using a very old > version of BouncyCastle: > {code:xml} > <dependency> > <groupId>org.bouncycastle</groupId> > <artifactId>bcprov-jdk16</artifactId> > <version>1.46</version> > <scope>test</scope> > </dependency> > {code} > The *-jdk16 artifacts have been discontinued and are not recommended (see > [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]). > > In particular, the newest release, 1.46, is from {color:#FF}2011{color}! > [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16] > The currently maintained and recommended artifacts are *-jdk15on: > [https://www.bouncycastle.org/latest_releases.html] > They're currently on version 1.60, released only a few months ago. > We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 > release. It's currently a test-only artifact, so there should be no > backwards-compatibility issues with updating this. It's also needed for > YARN-6586, where we'll actually be shipping it.
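For reference, the *-jdk15on replacement described in the quoted issue would look roughly like the fragment below (coordinates and version 1.60 as named in the comment; the {{test}} scope is kept as it is today, pending YARN-6586):

```xml
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>1.60</version>
  <scope>test</scope>
</dependency>
```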
[jira] [Commented] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644126#comment-16644126 ] Hadoop QA commented on HADOOP-15836: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 10s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 37s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 40 unchanged - 4 fixed = 40 total (was 44) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 30s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 43s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 43s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 6s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15836 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943092/HADOOP-15836.1.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7163153b59a1 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / bf04f19 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15332/testReport/ | | Max. process+thread count | 1362 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15332/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Review of AccessControlList.java >
[jira] [Commented] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries
[ https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644107#comment-16644107 ] Steve Loughran commented on HADOOP-15834: - Moved the DDB active stack to a self-contained fix HADOOP-15837 > Improve throttling on S3Guard DDB batch retries > --- > > Key: HADOOP-15834 > URL: https://issues.apache.org/jira/browse/HADOOP-15834 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Priority: Major > > the batch throttling may fail too fast > if there's batch update of 25 writes but the default retry count is nine > attempts, only nine writes of the batch may be attempted...even if each > attempt is actually successfully writing data. > In contrast, a single write of a piece of data gets the same no. of attempts, > so 25 individual writes can handle a lot more throttling than a bulk write. > Proposed: retry logic to be more forgiving of batch writes, such as not > consider a batch call where at least one data item was written to count as a > failure
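The retry policy proposed in the quoted description can be sketched as below. This is a hypothetical illustration, not Hadoop code: {{writeBatch}} and {{writeAttempt}} are made-up names, with {{writeAttempt}} standing in for a DynamoDB-style batch write that accepts some items and leaves the rest unprocessed. The key point is that only attempts making zero progress consume the retry budget.

```java
// Sketch of the proposed batch-retry policy: a throttled batch attempt
// that still writes at least one item does not count as a failure, so a
// 25-item batch is not capped by the single-write retry limit.
// writeBatch and writeAttempt are hypothetical names, not Hadoop APIs.
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.function.ToIntFunction;

public class BatchRetrySketch {
    /**
     * writeAttempt removes the items the service accepted from the deque
     * and returns how many it accepted (0 when fully throttled).
     * Only zero-progress attempts consume the failure budget.
     */
    static int writeBatch(List<String> items,
                          ToIntFunction<Deque<String>> writeAttempt,
                          int maxFailedAttempts) {
        Deque<String> pending = new ArrayDeque<>(items);
        int failedAttempts = 0;
        int totalAttempts = 0;
        while (!pending.isEmpty()) {
            int accepted = writeAttempt.applyAsInt(pending);
            totalAttempts++;
            if (accepted == 0) {
                // no progress at all: this is a real failure
                if (++failedAttempts >= maxFailedAttempts) {
                    throw new RuntimeException("batch write gave up after "
                        + failedAttempts + " attempts with no progress");
                }
            } else {
                failedAttempts = 0; // partial progress resets the budget
            }
        }
        return totalAttempts;
    }
}
```

Under this scheme a fully throttled service still fails fast, while a slowly draining batch keeps going as long as each round writes something.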
[jira] [Commented] (HADOOP-15837) DynamoDB table Update can fail S3A FS init
[ https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644105#comment-16644105 ] Steve Loughran commented on HADOOP-15837: - Proposed fix: waitForTableActive wrapped with retry; the failure to exit the UPDATING state triggers this. Docs updated. Given that the table still seems live (the AWS docs confirm this): https://docs.amazonaws.cn/en_us/amazondynamodb/latest/developerguide/WorkingWithTables.Basics.html#WorkingWithTables.Basics.UpdateTable the core solution is simple: updating == ready to read. I still want to add some retry checks on waitForTable though, in case the time for the table to come up is > the built-in wait time, which is clearly pretty small > DynamoDB table Update can fail S3A FS init > -- > > Key: HADOOP-15837 > URL: https://issues.apache.org/jira/browse/HADOOP-15837 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 > Environment: s3guard test with small capacity (10) but autoscale > enabled & multiple consecutive parallel test runs executed...this seems to > have been enough load to trigger the state change >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > When DDB autoscales a table, it goes into an UPDATING state. The > waitForTableActive operation in the AWS SDK doesn't seem to wait long enough > for this to recover. We need to catch & retry
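The retry checks on waitForTable mentioned above could look something like the sketch below, under the assumption that a single bounded SDK wait (such as waitForTableActive) may give up before a slow table comes up. {{waitOnce}} and {{waitForTableWithRetries}} are illustrative names, not the actual patch or SDK API.

```java
// Sketch of wrapping a bounded table wait in an outer retry loop, so a
// table that takes longer than one built-in wait interval to come up
// does not fail FS init. All names here are illustrative.
import java.util.concurrent.Callable;

public class WaitForTableSketch {
    /**
     * waitOnce performs one bounded wait (like the SDK's
     * waitForTableActive): true means the table reached a usable state,
     * false or an exception means that wait attempt gave up.
     */
    static void waitForTableWithRetries(Callable<Boolean> waitOnce,
                                        int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                if (waitOnce.call()) {
                    return; // table is usable
                }
            } catch (Exception e) {
                last = e; // bounded wait gave up; retry
            }
        }
        throw new Exception("table did not become active after "
            + maxAttempts + " wait attempts", last);
    }
}
```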
[jira] [Commented] (HADOOP-15837) DynamoDB table Update can fail S3A FS init
[ https://issues.apache.org/jira/browse/HADOOP-15837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644095#comment-16644095 ] Steve Loughran commented on HADOOP-15837: - Stack trace broke S3A create. Running tests *did not seem to fail*. So a client can work with it in updating or active, it's just the activation which is failing. {code} [ERROR] Tests run: 68, Failures: 0, Errors: 1, Skipped: 4, Time elapsed: 429.837 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations [ERROR] testCreateFlagAppendNonExistingFile(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations) Time elapsed: 127.843 s <<< ERROR! java.lang.RuntimeException: java.io.IOException: Failed to instantiate metadata store org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore defined in fs.s3a.metadatastore.impl: java.lang.IllegalArgumentException: Table hwdev-steve-ireland-new did not transition into ACTIVE state. at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:464) at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileContext(S3ATestUtils.java:218) at org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations.setUp(ITestS3AFileContextMainOperations.java:33) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) Caused by: java.io.IOException: Failed to instantiate metadata store org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore defined in fs.s3a.metadatastore.impl: java.lang.IllegalArgumentException: Table hwdev-steve-ireland-new did not transition into ACTIVE state. 
at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:114) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:378) at org.apache.hadoop.fs.DelegateToFileSystem.(DelegateToFileSystem.java:52) at org.apache.hadoop.fs.s3a.S3A.(S3A.java:40) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:135) at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:173) at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:258) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:336) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:333) at
[jira] [Commented] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus
[ https://issues.apache.org/jira/browse/HADOOP-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644084#comment-16644084 ] Hadoop QA commented on HADOOP-15831: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 20s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 15m 8s{color} | {color:red} hadoop-distcp in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 66m 44s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.tools.TestCopyListingFileStatus | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15831 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943097/HADOOP-15831.02.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 52883e8a6cca 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / d5dd6f3 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HADOOP-Build/15331/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15331/testReport/ | | Max. process+thread count | 464 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15331/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message
[jira] [Created] (HADOOP-15837) DynamoDB table Update can fail S3A FS init
Steve Loughran created HADOOP-15837: --- Summary: DynamoDB table Update can fail S3A FS init Key: HADOOP-15837 URL: https://issues.apache.org/jira/browse/HADOOP-15837 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.2.0 Environment: s3guard test with small capacity (10) but autoscale enabled & multiple consecutive parallel test runs executed...this seems to have been enough load to trigger the state change Reporter: Steve Loughran Assignee: Steve Loughran When DDB autoscales a table, it goes into an UPDATING state. The waitForTableActive operation in the AWS SDK doesn't seem to wait long enough for this to recover. We need to catch & retry
[jira] [Commented] (HADOOP-15833) Intermittent failures of some S3A tests with S3Guard in parallel test runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644064#comment-16644064 ] Steve Loughran commented on HADOOP-15833: - And this {code} [ERROR] Tests run: 18, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 77.962 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3AContractGetFileStatusV1List [ERROR] testListLocatedStatusEmptyDirectory(org.apache.hadoop.fs.s3a.ITestS3AContractGetFileStatusV1List) Time elapsed: 3.547 s <<< FAILURE! java.lang.AssertionError: listLocatedStatus(test dir): directory count in 3 directories and 0 files expected:<1> but was:<3> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.apache.hadoop.fs.contract.ContractTestUtils$TreeScanResults.assertSizeEquals(ContractTestUtils.java:1645) at org.apache.hadoop.fs.contract.AbstractContractGetFileStatusTest.testListLocatedStatusEmptyDirectory(AbstractContractGetFileStatusTest.java:131) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} > Intermittent failures of some S3A tests with S3Guard in parallel test runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: Screen Shot 2018-10-09 at 15.33.35.png > > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode.
[jira] [Commented] (HADOOP-15798) ITestS3GuardListConsistency#testConsistentListAfterDelete failing with LocalMetadataStore
[ https://issues.apache.org/jira/browse/HADOOP-15798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644062#comment-16644062 ] Steve Loughran commented on HADOOP-15798: - I've now a failure earlier in the same test case with s3guard + auth in HADOOP-15833. Adding the diagnostics. > ITestS3GuardListConsistency#testConsistentListAfterDelete failing with > LocalMetadataStore > - > > Key: HADOOP-15798 > URL: https://issues.apache.org/jira/browse/HADOOP-15798 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.1.1 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Minor > > Test fails constantly when running with LocalMetadataStore. > {noformat} > java.lang.AssertionError > at org.junit.Assert.fail(Assert.java:86) > at org.junit.Assert.assertTrue(Assert.java:41) > at org.junit.Assert.assertFalse(Assert.java:64) > at org.junit.Assert.assertFalse(Assert.java:74) > at > org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:205) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) > at > 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat}
[jira] [Updated] (HADOOP-15833) Intermittent failures of some S3A tests with S3Guard in parallel test runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15833: Summary: Intermittent failures of some S3A tests with S3Guard in parallel test runs (was: ITestS3GuardToolDynamoDB fails intermittently in parallel runs) > Intermittent failures of some S3A tests with S3Guard in parallel test runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: Screen Shot 2018-10-09 at 15.33.35.png > > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Work started] (HADOOP-15833) Intermittent failures of some S3A tests with S3Guard in parallel test runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-15833 started by Steve Loughran. --- > Intermittent failures of some S3A tests with S3Guard in parallel test runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: Screen Shot 2018-10-09 at 15.33.35.png > > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644032#comment-16644032 ] Steve Loughran commented on HADOOP-15833: - {code} [ERROR] Tests run: 9, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 127.117 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency [ERROR] testConsistentListAfterDelete(org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency) Time elapsed: 3.501 s <<< ERROR! org.apache.hadoop.fs.PathIsNotEmptyDirectoryException: `s3a://hwdev-steve-ireland-new/fork-0002/test/a/b/dir1': Directory is not empty at org.apache.hadoop.fs.s3a.S3AFileSystem.innerDelete(S3AFileSystem.java:1901) at org.apache.hadoop.fs.s3a.S3AFileSystem.delete(S3AFileSystem.java:1846) at org.apache.hadoop.fs.s3a.ITestS3GuardListConsistency.testConsistentListAfterDelete(ITestS3GuardListConsistency.java:197) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} > ITestS3GuardToolDynamoDB fails intermittently in parallel runs > -- > > Key: HADOOP-15833 > URL: 
https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: Screen Shot 2018-10-09 at 15.33.35.png > > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode.
[jira] [Comment Edited] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643417#comment-16643417 ] Steve Loughran edited comment on HADOOP-15833 at 10/9/18 8:09 PM: -- {{ITestS3GuardToolDynamoDB.testPruneCommandCLI}} timing out after 60s on parallel runs (I keep them small to force failures here; looks like the retry stuff is working so well that tests time out). If there's a large amount of data to prune, maybe it's taking too long. Proposal: increase the test timeout to the scale-test timeout. (I know, I could just increase my ddb size, but I want to make sure it will eventually complete here even on retries) {code} [ERROR] testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB) Time elapsed: 600.03 s <<< ERROR! java.lang.Exception: test timed out after 60 milliseconds at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoffOnBatchWrite(DynamoDBMetadataStore.java:813) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.processBatchWriteRequest(DynamoDBMetadataStore.java:765) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.innerPut(DynamoDBMetadataStore.java:851) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.removeAuthoritativeDirFlag(DynamoDBMetadataStore.java:1080) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.prune(DynamoDBMetadataStore.java:1033) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.prune(DynamoDBMetadataStore.java:993) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testPruneCommand(AbstractS3GuardToolTestBase.java:271) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testPruneCommandCLI(AbstractS3GuardToolTestBase.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 
at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} was (Author: ste...@apache.org): {{testPruneCommandCLI}} timing out after 60s on parallel runs (I keep them small to force failures here, looks like the retry stuff is working so well tests time out). If there's a large amount of data to prune, maybe it's taking too long. Proposal: increase test timeout to scale timeout. (I know, I could just increase my ddb size, but I want to make sure it will eventually complete here even on retries) {code} [ERROR] testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB) Time elapsed: 600.03 s <<< ERROR!
java.lang.Exception: test timed out after 60 milliseconds at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoffOnBatchWrite(DynamoDBMetadataStore.java:813) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.processBatchWriteRequest(DynamoDBMetadataStore.java:765) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.innerPut(DynamoDBMetadataStore.java:851) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.removeAuthoritativeDirFlag(DynamoDBMetadataStore.java:1080) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.prune(DynamoDBMetadataStore.java:1033) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.prune(DynamoDBMetadataStore.java:993) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testPruneCommand(AbstractS3GuardToolTestBase.java:271) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testPruneCommandCLI(AbstractS3GuardToolTestBase.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644025#comment-16644025 ] Hadoop QA commented on HADOOP-14445: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 52s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 1s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 46s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 56s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 30s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 30s{color} | {color:orange} root: The patch generated 3 new + 518 unchanged - 10 fixed = 521 total (was 528) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 35s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 47s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 4s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 9m 41s{color} | {color:red} hadoop-common in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 11s{color} | {color:green} hadoop-kms in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 51s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}123m 7s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.ha.TestZKFailoverController | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-14445 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943078/HADOOP-14445.19.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b0afb9fa31b9 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality |
[jira] [Updated] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter
[ https://issues.apache.org/jira/browse/HADOOP-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Eagles updated HADOOP-15835: - Status: Patch Available (was: Open) > Reuse Object Mapper in KMSJSONWriter > > > Key: HADOOP-15835 > URL: https://issues.apache.org/jira/browse/HADOOP-15835 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles >Priority: Major > Attachments: HADOOP-15835.001-branch-3.0.patch > > > In lieu of HADOOP-15550 in branch-3.0, branch-2.9, and branch-2.8, this patch > will provide some of the benefit of ObjectMapper reuse, though not as > complete as the JsonSerialization util lazy-loading fix.
[jira] [Commented] (HADOOP-15825) ABFS: Enable some tests for namespace not enabled account using OAuth
[ https://issues.apache.org/jira/browse/HADOOP-15825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16644005#comment-16644005 ] Hudson commented on HADOOP-15825: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15154 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/15154/]) HADOOP-15825. ABFS: Enable some tests for namespace not enabled account (stevel: rev bd50fa956b1ca25bb2136977b98a6aa6895eff8b) * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFinalize.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemInitAndCreate.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemRandomRead.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFileStatus.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestWasbAbfsCompatibility.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemFlush.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemBackCompat.java > ABFS: Enable some tests for namespace not enabled account using OAuth > - > > Key: HADOOP-15825 > URL: https://issues.apache.org/jira/browse/HADOOP-15825 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Da Zhou >Assignee: Da Zhou >Priority: Major > Attachments: HADOOP-15825-001.patch, HADOOP-15825-002.patch > > > When testing namespace not enabled account using Oauth, some tests were > skipped. So need to update the tests. 
[jira] [Updated] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus
[ https://issues.apache.org/jira/browse/HADOOP-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-15831: Attachment: HADOOP-15831.02.patch > Include modificationTime in the toString method of CopyListingFileStatus > > > Key: HADOOP-15831 > URL: https://issues.apache.org/jira/browse/HADOOP-15831 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Attachments: HADOOP-15831.01.patch, HADOOP-15831.02.patch > > > I was looking at a DistCp error observed in hbase backup test: > {code} > 2018-10-08 18:12:03,067 WARN [Thread-933] mapred.LocalJobRunner$Job(590): > job_local1175594345_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@7ac56817{hdfs://localhost:41712/user/hbase/test-data/ > > c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@7aa4deb2{hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10- > > 57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > 2018-10-08 18:12:03,150 INFO [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(226): Progress: 100.0% subTask: > 1.0 mapProgress: 1.0 > {code} > I noticed that modificationTime was not included in the toString of > CopyListingFileStatus. 
> I propose including modificationTime so that it is easier to tell when the > respective files last changed.
[jira] [Updated] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter
[ https://issues.apache.org/jira/browse/HADOOP-15835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Eagles updated HADOOP-15835: - Attachment: HADOOP-15835.001-branch-3.0.patch > Reuse Object Mapper in KMSJSONWriter > > > Key: HADOOP-15835 > URL: https://issues.apache.org/jira/browse/HADOOP-15835 > Project: Hadoop Common > Issue Type: Bug >Reporter: Jonathan Eagles >Assignee: Jonathan Eagles >Priority: Major > Attachments: HADOOP-15835.001-branch-3.0.patch > > > In lieu of HADOOP-15550 in branch-3.0, branch-2.9, and branch-2.8, this patch > will provide some of the benefit of ObjectMapper reuse, though not as > complete as the JsonSerialization util lazy-loading fix.
[jira] [Updated] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HADOOP-15836: - Status: Patch Available (was: Open) > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15836.1.patch > > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
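The last bullet in the issue description (return unmodifiable collections where the JavaDoc says callers must not manipulate the data structure) can be sketched as below. This is an illustrative stand-in only: `Acl`, `addUser`, and `getUsers` are hypothetical names, not the real AccessControlList API.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Sketch of the "unmodifiable Collections" pattern; Acl is an illustrative
// stand-in for AccessControlList, not the real class.
class Acl {
    private final Set<String> users = new HashSet<>();

    void addUser(String user) {
        users.add(user);
    }

    // Contract: callers must not manipulate the returned collection.
    // Wrapping it makes any mutation attempt fail fast with
    // UnsupportedOperationException instead of silently corrupting state.
    Collection<String> getUsers() {
        return Collections.unmodifiableSet(users);
    }
}
```

The wrapper is a view, not a copy, so the getter stays cheap while still enforcing the documented read-only contract.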
[jira] [Updated] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HADOOP-15836: - Attachment: HADOOP-15836.1.patch > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15836.1.patch > > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15836) Review of AccessControlList.java
BELUGA BEHR created HADOOP-15836: Summary: Review of AccessControlList.java Key: HADOOP-15836 URL: https://issues.apache.org/jira/browse/HADOOP-15836 Project: Hadoop Common Issue Type: Improvement Components: common, security Affects Versions: 3.2.0 Reporter: BELUGA BEHR * Improve unit tests (expected / actual were backwards) * Unit test expected elements to be in order but the class's return Collections were unordered * Formatting cleanup * Removed superfluous white space * Remove use of LinkedList * Removed superfluous code * Use {{unmodifiable}} Collections where JavaDoc states that caller must not manipulate the data structure
[jira] [Assigned] (HADOOP-15836) Review of AccessControlList.java
[ https://issues.apache.org/jira/browse/HADOOP-15836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR reassigned HADOOP-15836: Assignee: BELUGA BEHR > Review of AccessControlList.java > > > Key: HADOOP-15836 > URL: https://issues.apache.org/jira/browse/HADOOP-15836 > Project: Hadoop Common > Issue Type: Improvement > Components: common, security >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > > * Improve unit tests (expected / actual were backwards) > * Unit test expected elements to be in order but the class's return > Collections were unordered > * Formatting cleanup > * Removed superfluous white space > * Remove use of LinkedList > * Removed superfluous code > * Use {{unmodifiable}} Collections where JavaDoc states that caller must not > manipulate the data structure -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus
[ https://issues.apache.org/jira/browse/HADOOP-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643958#comment-16643958 ] Steve Loughran commented on HADOOP-15831: - Straightforward. Can you add a test which creates an instance & calls toString on it uninitialized and then initialized? May seem overkill, but in the past we (I) have written code which NPEs on a toString() if it's not fully initialized. Example, {{Token.toString()}}. Thanks > Include modificationTime in the toString method of CopyListingFileStatus > > > Key: HADOOP-15831 > URL: https://issues.apache.org/jira/browse/HADOOP-15831 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Attachments: HADOOP-15831.01.patch > > > I was looking at a DistCp error observed in hbase backup test: > {code} > 2018-10-08 18:12:03,067 WARN [Thread-933] mapred.LocalJobRunner$Job(590): > job_local1175594345_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@7ac56817{hdfs://localhost:41712/user/hbase/test-data/ > > c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@7aa4deb2{hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10- > > 57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > 2018-10-08 18:12:03,150 INFO [Time-limited test] > 
mapreduce.MapReduceBackupCopyJob$BackupDistCp(226): Progress: 100.0% subTask: > 1.0 mapProgress: 1.0 > {code} > I noticed that modificationTime was not included in the toString of > CopyListingFileStatus. > I propose including modificationTime so that it is easier to tell when the > respective files last changed.
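The kind of null-safety test Steve asks for might look like the sketch below. Everything here is hypothetical: `CopyStatusSketch` stands in for CopyListingFileStatus, whose real fields and toString are not reproduced.

```java
// Hypothetical sketch: toString() must not NPE on an instance that has not
// been fully initialized. CopyStatusSketch is an illustrative stand-in for
// CopyListingFileStatus.
class CopyStatusSketch {
    String path;              // null until initialized
    Long modificationTime;    // null until initialized

    @Override
    public String toString() {
        // String concatenation renders null fields as "null" rather than
        // throwing, so an uninitialized instance is still printable.
        return "CopyStatusSketch{path=" + path
            + ", modificationTime=" + modificationTime + "}";
    }
}

class ToStringSafetyDemo {
    public static void main(String[] args) {
        // Uninitialized: calling toString() must not throw.
        System.out.println(new CopyStatusSketch());

        // Initialized: the set fields appear in the rendered string.
        CopyStatusSketch s = new CopyStatusSketch();
        s.path = "/tmp/example";
        s.modificationTime = 1539022262249L;
        System.out.println(s);
    }
}
```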
[jira] [Commented] (HADOOP-15550) Avoid static initialization of ObjectMappers
[ https://issues.apache.org/jira/browse/HADOOP-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643956#comment-16643956 ] Jonathan Eagles commented on HADOOP-15550: -- Going ahead on the proposed plan. I have cherry-picked this commit to branch-3.1. Created HADOOP-15835 to gain a partial benefit of ObjectMapper reuse. > Avoid static initialization of ObjectMappers > > > Key: HADOOP-15550 > URL: https://issues.apache.org/jira/browse/HADOOP-15550 > Project: Hadoop Common > Issue Type: Bug > Components: performance >Affects Versions: 3.2.0 >Reporter: Todd Lipcon >Assignee: Todd Lipcon >Priority: Minor > Fix For: 3.2.0, 3.1.2 > > Attachments: hadoop-15550.txt, hadoop-15550.txt, hadoop-15550.txt, > hadoop-15550.txt, hadoop-15550.txt > > > Various classes statically initialize an ObjectMapper READER instance. This > ends up doing a bunch of class-loading of Jackson libraries that can add up > to a fair amount of CPU, even if the reader ends up not being used. This is > particularly the case with WebHdfsFileSystem, which is class-loaded by a > serviceloader even when unused in a particular job. We should lazy-init these > members instead of doing so as a static class member. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
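The lazy-init approach the issue describes can be sketched with the initialization-on-demand holder idiom. This is a sketch only: `ExpensiveCodec`, `LazyReader`, and `reader()` are hypothetical names standing in for the Jackson ObjectMapper setup, which is not reproduced here.

```java
// Sketch of lazy initialization via the holder idiom. The nested Holder
// class is not loaded (and the codec not built) until reader() is first
// called, so merely class-loading LazyReader costs nothing.
class ExpensiveCodec {
    ExpensiveCodec() {
        // In the real fix, this is the costly Jackson ObjectMapper setup.
    }
}

class LazyReader {
    private static final class Holder {
        // Built once, on first use, by the JVM's class-initialization
        // machinery -- thread-safe without explicit locking.
        static final ExpensiveCodec READER = new ExpensiveCodec();
    }

    static ExpensiveCodec reader() {
        return Holder.READER;
    }
}
```

Callers that never invoke `reader()` never pay the construction cost, which is the point of the WebHdfsFileSystem serviceloader case described above.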
[jira] [Created] (HADOOP-15835) Reuse Object Mapper in KMSJSONWriter
Jonathan Eagles created HADOOP-15835: Summary: Reuse Object Mapper in KMSJSONWriter Key: HADOOP-15835 URL: https://issues.apache.org/jira/browse/HADOOP-15835 Project: Hadoop Common Issue Type: Bug Reporter: Jonathan Eagles Assignee: Jonathan Eagles In lieu of HADOOP-15550 in branch-3.0, branch-2.9, and branch-2.8, this patch will provide some of the benefit of ObjectMapper reuse, though not as complete as the JsonSerialization util lazy-loading fix.
[jira] [Updated] (HADOOP-15825) ABFS: Enable some tests for namespace not enabled account using OAuth
[ https://issues.apache.org/jira/browse/HADOOP-15825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15825: Resolution: Fixed Status: Resolved (was: Patch Available) +1 resolved, thanks > ABFS: Enable some tests for namespace not enabled account using OAuth > - > > Key: HADOOP-15825 > URL: https://issues.apache.org/jira/browse/HADOOP-15825 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Da Zhou >Assignee: Da Zhou >Priority: Major > Attachments: HADOOP-15825-001.patch, HADOOP-15825-002.patch > > > When testing namespace not enabled account using Oauth, some tests were > skipped. So need to update the tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15825) ABFS: Enable some tests for namespace not enabled account using OAuth
[ https://issues.apache.org/jira/browse/HADOOP-15825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15825: Summary: ABFS: Enable some tests for namespace not enabled account using OAuth (was: ABFS: Enable some tests for namespace not enabled account using Oauth) > ABFS: Enable some tests for namespace not enabled account using OAuth > - > > Key: HADOOP-15825 > URL: https://issues.apache.org/jira/browse/HADOOP-15825 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Da Zhou >Assignee: Da Zhou >Priority: Major > Attachments: HADOOP-15825-001.patch, HADOOP-15825-002.patch > > > When testing namespace not enabled account using Oauth, some tests were > skipped. So need to update the tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15550) Avoid static initialization of ObjectMappers
[ https://issues.apache.org/jira/browse/HADOOP-15550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Eagles updated HADOOP-15550: - Fix Version/s: 3.1.2 > Avoid static initialization of ObjectMappers > > > Key: HADOOP-15550 > URL: https://issues.apache.org/jira/browse/HADOOP-15550 > Project: Hadoop Common > Issue Type: Bug > Components: performance >Affects Versions: 3.2.0 >Reporter: Todd Lipcon >Assignee: Todd Lipcon >Priority: Minor > Fix For: 3.2.0, 3.1.2 > > Attachments: hadoop-15550.txt, hadoop-15550.txt, hadoop-15550.txt, > hadoop-15550.txt, hadoop-15550.txt > > > Various classes statically initialize an ObjectMapper READER instance. This > ends up doing a bunch of class-loading of Jackson libraries that can add up > to a fair amount of CPU, even if the reader ends up not being used. This is > particularly the case with WebHdfsFileSystem, which is class-loaded by a > serviceloader even when unused in a particular job. We should lazy-init these > members instead of doing so as a static class member.
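The lazy-init approach proposed in HADOOP-15550 can be sketched with Java's initialization-on-demand holder idiom, which defers construction until first use while staying thread-safe via JVM class initialization. This is only an illustrative sketch: {{ExpensiveReader}} is a hypothetical stand-in for Jackson's ObjectReader, not Hadoop's actual code.

```java
// Sketch of lazy initialization via the initialization-on-demand holder idiom.
// ExpensiveReader is a placeholder for a costly-to-construct object such as
// Jackson's ObjectReader (assumption: not Hadoop's real class names).
public class LazyReaderExample {
    // Counts constructions so the laziness is observable.
    public static int constructions = 0;

    public static final class ExpensiveReader {
        public ExpensiveReader() {
            constructions++;  // stands in for heavy Jackson class-loading
        }
        public String read(String json) {
            return "parsed:" + json;
        }
    }

    // The holder class is only loaded, and the reader only built, on first use.
    private static final class Holder {
        static final ExpensiveReader READER = new ExpensiveReader();
    }

    public static ExpensiveReader reader() {
        return Holder.READER;
    }
}
```

Loading {{LazyReaderExample}} itself costs nothing; the JVM guarantees {{Holder}} is initialized exactly once, on the first {{reader()}} call, with no explicit locking.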
[jira] [Commented] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus
[ https://issues.apache.org/jira/browse/HADOOP-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643925#comment-16643925 ] Hadoop QA commented on HADOOP-15831: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 3s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 34s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 12m 57s{color} | {color:green} hadoop-distcp in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 10s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15831 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943074/HADOOP-15831.01.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux dee3ee08b198 4.4.0-133-generic #159-Ubuntu SMP Fri Aug 10 07:31:43 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c3d22d3 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15327/testReport/ | | Max. process+thread count | 411 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15327/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Include modificationTime in the toString method
[jira] [Commented] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries
[ https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643923#comment-16643923 ] Steve Loughran commented on HADOOP-15834: - assuming that "exists but inactive" == capacity reallocation, we should catch & log and use the batch retry policy. Key point: we can/should wait longer than just the SDK does. > Improve throttling on S3Guard DDB batch retries > --- > > Key: HADOOP-15834 > URL: https://issues.apache.org/jira/browse/HADOOP-15834 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Priority: Major > > The batch throttling may fail too fast: > if there's a batch update of 25 writes but the default retry count is nine > attempts, only nine writes of the batch may be attempted... even if each > attempt is actually successfully writing data. > In contrast, a single write of a piece of data gets the same number of attempts, > so 25 individual writes can tolerate a lot more throttling than a bulk write. > Proposed: make the retry logic more forgiving of batch writes, such as not > counting a batch call where at least one data item was written as a > failure
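The proposal above, not counting a batch call that wrote at least one item as a failure, can be sketched as a retry loop whose attempt counter only advances when a call makes no progress. Names and signatures below are illustrative, not S3Guard's actual API; the writer models DynamoDB-style batch writes that return the unprocessed items.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Function;

// Sketch of batch-retry logic that is forgiving of partial progress.
// (Assumption: illustrative names, not Hadoop/S3Guard code.)
public class BatchRetryExample {

    /**
     * Repeatedly submits the remaining items. The writer returns the items
     * that were NOT accepted (like DynamoDB's "unprocessed items"). Only
     * calls that write nothing at all consume the retry budget, so a 25-item
     * batch is not abandoned after the per-call retry limit.
     */
    public static <T> List<T> writeWithRetries(
            List<T> items, Function<List<T>, List<T>> writer, int maxFailedAttempts) {
        List<T> remaining = new ArrayList<>(items);
        int failedAttempts = 0;
        while (!remaining.isEmpty() && failedAttempts < maxFailedAttempts) {
            List<T> unprocessed = writer.apply(remaining);
            if (unprocessed.size() < remaining.size()) {
                failedAttempts = 0;   // progress made: don't count as a failure
            } else {
                failedAttempts++;     // nothing written: throttled, burn one attempt
            }
            remaining = new ArrayList<>(unprocessed);
        }
        return remaining;  // empty on success
    }
}
```

A writer that trickles one item per call still completes a 25-item batch under a nine-attempt budget, while a fully throttled writer gives up after the configured number of fruitless calls.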
[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run
[ https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643906#comment-16643906 ] Sean Mackrory commented on HADOOP-15819: So I think it was mentioned in the other JIRA that each test should be in its own JVM, but if we're seeing FSs in one test that were already closed, and we get a stack trace that points to another test, then something isn't right. Not closing the fs in tests seems like a possible solution, but (a) we need to rule out the possibility that this is a bug in the actual product, and (b) closing the FS does multiple important things, and duplicating that everywhere isn't just a Band-Aid, it's kind of a messy one. The FS cache really feels inherently broken in the parallel-tests case, which is why I initially liked the idea of disabling caching for the tests. If you rely on the FileSystem class to either give you an existing instance or create one for you, then we need to rely on it to decide when to close it. Otherwise people either break what other threads are doing or they don't close their FS instances. Both of those are horrible. Some kind of ref-counting, maybe? But modifying the design of the FS cache scares me, because it's been that way for a long time and is kind of important. Which raises the question: if I'm seeing this all the time recently but none of us saw it before a little while ago - what broke it? Because I don't think it was the FS caching. I wonder if git-bisect might be a useful way to get a lead here. > S3A integration test failures: FileSystem is closed! 
- without parallel test > run > > > Key: HADOOP-15819 > URL: https://issues.apache.org/jira/browse/HADOOP-15819 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.1.1 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Critical > Attachments: S3ACloseEnforcedFileSystem.java, > closed_fs_closers_example_5klines.log.zip > > > Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against > Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these > failures: > {noformat} > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob) > Time elapsed: 0.027 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob > [ERROR] > testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.021 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.022 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest) > Time elapsed: 0.023 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! 
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 > s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob > [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob) > Time elapsed: 0.039 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 > s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory > [ERROR] > testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory) > Time elapsed: 0.014 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > {noformat} > The big issue is that the tests are running in a serial manner - no test is > running on top of the other - so we should not see that the tests are failing > like this. The issue could be in how we handle > org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same > S3AFileSystem so if A test uses a FileSystem and closes it in teardown then B > test will get the same FileSystem object from the cache and try to use it, > but it
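The ref-counting idea floated in the comment above could look roughly like this: the cache hands out one shared instance per key and only really closes it when the last user releases it. This is a toy sketch around a hypothetical {{Resource}} class, not a proposal for Hadoop's actual {{FileSystem.CACHE}}.

```java
import java.util.HashMap;
import java.util.Map;

// Toy sketch of a ref-counted cache: close() semantics are deferred until
// the last reference is released. (Assumption: illustrative names only.)
public class RefCountedCacheExample {

    public static class Resource {
        public boolean closed = false;
        void doClose() { closed = true; }
    }

    private final Map<String, Resource> cache = new HashMap<>();
    private final Map<String, Integer> refs = new HashMap<>();

    /** Returns the shared instance for this key, creating it on first use. */
    public synchronized Resource acquire(String key) {
        refs.merge(key, 1, Integer::sum);
        return cache.computeIfAbsent(key, k -> new Resource());
    }

    /** Drops one reference; actually closes only when nobody holds the key. */
    public synchronized void release(String key) {
        int n = refs.merge(key, -1, Integer::sum);
        if (n <= 0) {
            refs.remove(key);
            Resource r = cache.remove(key);
            if (r != null) {
                r.doClose();  // last reference gone: safe to really close
            }
        }
    }
}
```

With this shape, a test tearing down "its" FS no longer pulls the instance out from under another test that acquired the same key.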
[jira] [Updated] (HADOOP-15832) Upgrade BouncyCastle
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated HADOOP-15832: --- Affects Version/s: (was: 3.2.0) 3.3.0 > Upgrade BouncyCastle > > > Key: HADOOP-15832 > URL: https://issues.apache.org/jira/browse/HADOOP-15832 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.3.0 >Reporter: Robert Kanter >Assignee: Robert Kanter >Priority: Major > Attachments: HADOOP-15832.001.patch > > > As part of my work on YARN-6586, I noticed that we're using a very old > version of BouncyCastle: > {code:xml} > <dependency> > <groupId>org.bouncycastle</groupId> > <artifactId>bcprov-jdk16</artifactId> > <version>1.46</version> > <scope>test</scope> > </dependency> > {code} > The *-jdk16 artifacts have been discontinued and are not recommended (see > [http://bouncy-castle.1462172.n4.nabble.com/Bouncycaslte-bcprov-jdk15-vs-bcprov-jdk16-td4656252.html]). > > In particular, the newest release, 1.46, is from 2011! > [https://mvnrepository.com/artifact/org.bouncycastle/bcprov-jdk16] > The currently maintained and recommended artifacts are *-jdk15on: > [https://www.bouncycastle.org/latest_releases.html] > They're currently on version 1.60, released only a few months ago. > We should update BouncyCastle to the *-jdk15on artifacts and the 1.60 > release. It's currently a test-only artifact, so there should be no > backwards-compatibility issues with updating this. It's also needed for > YARN-6586, where we'll actually be shipping it.
[jira] [Updated] (HADOOP-15832) Upgrade BouncyCastle
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Kanter updated HADOOP-15832: --- Target Version/s: 3.3.0 (was: 3.2.0)
[jira] [Commented] (HADOOP-15832) Upgrade BouncyCastle
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643877#comment-16643877 ] Robert Kanter commented on HADOOP-15832: I'm not sure why it ran twice, but the test failures seem unrelated. I ran them all on my machine and they all passed except for {{TestNameNodeMetadataConsistency}}, but that also fails on trunk (looks like HDFS-11396 is already open for that).
[jira] [Comment Edited] (HADOOP-15832) Upgrade BouncyCastle
[ https://issues.apache.org/jira/browse/HADOOP-15832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643877#comment-16643877 ] Robert Kanter edited comment on HADOOP-15832 at 10/9/18 6:16 PM: - I'm not sure why it ran twice, but the test failures seem unrelated. I ran them all on my machine and they all passed except for {{TestNameNodeMetadataConsistency}}, but that also fails without the patch (looks like HDFS-11396 is already open for that).
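For reference, the upgrade this issue proposes would amount to swapping the dependency coordinates in the pom along these lines (a sketch only; the exact module placement is not shown in this thread):

```xml
<dependency>
  <groupId>org.bouncycastle</groupId>
  <artifactId>bcprov-jdk15on</artifactId>
  <version>1.60</version>
  <scope>test</scope>
</dependency>
```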
[jira] [Commented] (HADOOP-15825) ABFS: Enable some tests for namespace not enabled account using Oauth
[ https://issues.apache.org/jira/browse/HADOOP-15825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643839#comment-16643839 ] Hadoop QA commented on HADOOP-15825: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 28s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 8 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 56m 53s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15825 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943066/HADOOP-15825-002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 594da9ae579f 3.13.0-144-generic #193-Ubuntu SMP Thu Mar 15 17:03:53 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 600438b | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15326/testReport/ | | Max. process+thread count | 363 (vs. ulimit of 1) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15326/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > ABFS: Enable some tests for namespace not enabled account using Oauth > - > > Key:
[jira] [Commented] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643813#comment-16643813 ] Xiao Chen commented on HADOOP-14445: [^HADOOP-14445.19.patch] Thanks for the review [~ajayydv]. bq. canonicalService field in LoadBalancingKMSClientProvider The canonicalService is used for token selection. We depend on LBKMSCP and KMSCP's canonicalService to handle all combinations of token look up, so both are needed. You can try debug this with a token with client and a new client to see how it works differently. :) Good idea on the additional test coverage. Updated in patch 19. > Delegation tokens are not shared between KMS instances > -- > > Key: HADOOP-14445 > URL: https://issues.apache.org/jira/browse/HADOOP-14445 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 2.8.0, 3.0.0-alpha1 > Environment: CDH5.7.4, Kerberized, SSL, KMS-HA, at rest encryption >Reporter: Wei-Chiu Chuang >Assignee: Xiao Chen >Priority: Major > Attachments: HADOOP-14445-branch-2.8.002.patch, > HADOOP-14445-branch-2.8.patch, HADOOP-14445.002.patch, > HADOOP-14445.003.patch, HADOOP-14445.004.patch, HADOOP-14445.05.patch, > HADOOP-14445.06.patch, HADOOP-14445.07.patch, HADOOP-14445.08.patch, > HADOOP-14445.09.patch, HADOOP-14445.10.patch, HADOOP-14445.11.patch, > HADOOP-14445.12.patch, HADOOP-14445.13.patch, HADOOP-14445.14.patch, > HADOOP-14445.15.patch, HADOOP-14445.16.patch, HADOOP-14445.17.patch, > HADOOP-14445.18.patch, HADOOP-14445.19.patch, > HADOOP-14445.branch-2.000.precommit.patch, > HADOOP-14445.branch-2.001.precommit.patch, HADOOP-14445.branch-2.01.patch, > HADOOP-14445.branch-2.02.patch, HADOOP-14445.branch-2.03.patch, > HADOOP-14445.branch-2.04.patch, HADOOP-14445.branch-2.05.patch, > HADOOP-14445.branch-2.06.patch, HADOOP-14445.branch-2.8.003.patch, > HADOOP-14445.branch-2.8.004.patch, HADOOP-14445.branch-2.8.005.patch, > HADOOP-14445.branch-2.8.006.patch, HADOOP-14445.branch-2.8.revert.patch, > 
HADOOP-14445.compat.patch, HADOOP-14445.revert.patch > > > As discovered in HADOOP-14441, KMS HA using LoadBalancingKMSClientProvider does > not share delegation tokens (a client uses the KMS address/port as the key for > the delegation token). > {code:title=DelegationTokenAuthenticatedURL#openConnection} > if (!creds.getAllTokens().isEmpty()) { > InetSocketAddress serviceAddr = new InetSocketAddress(url.getHost(), > url.getPort()); > Text service = SecurityUtil.buildTokenService(serviceAddr); > dToken = creds.getToken(service); > {code} > But KMS doc states: > {quote} > Delegation Tokens > Similar to HTTP authentication, KMS uses Hadoop Authentication for delegation > tokens too. > Under HA, a KMS instance must verify the delegation token given by another > KMS instance, by checking the shared secret used to sign the delegation > token. To do this, all KMS instances must be able to retrieve the shared > secret from ZooKeeper. > {quote} > We should either update the KMS documentation, or fix this code to share > delegation tokens.
[jira] [Updated] (HADOOP-14445) Delegation tokens are not shared between KMS instances
[ https://issues.apache.org/jira/browse/HADOOP-14445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HADOOP-14445: --- Attachment: HADOOP-14445.19.patch
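The token-lookup mismatch at the heart of this issue can be illustrated with a toy model of the credentials map: a token stored under one host:port service key is simply not found when the client computes a key from a different KMS address. Names below are illustrative only, not Hadoop's actual Credentials/Token API.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of delegation-token selection keyed by a "service" string built
// from host:port. (Assumption: illustrative stand-in for Hadoop's
// Credentials.getToken / SecurityUtil.buildTokenService behavior.)
public class TokenServiceExample {

    private final Map<String, String> tokens = new HashMap<>();

    /** Builds a service key from an address, mirroring the host:port scheme. */
    public static String buildTokenService(String host, int port) {
        return host + ":" + port;
    }

    public void addToken(String service, String token) {
        tokens.put(service, token);
    }

    /** Returns the token stored under this exact service key, or null. */
    public String getToken(String service) {
        return tokens.get(service);
    }
}
```

Because lookup is an exact match on the service key, a token obtained through one KMS instance is invisible when the client connects to a second instance, which is why a shared canonical service name is needed.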
[jira] [Comment Edited] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run
[ https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643749#comment-16643749 ] Gabor Bota edited comment on HADOOP-15819 at 10/9/18 5:37 PM: --
Also note: I was getting interesting errors when I removed the line, maybe related to the fact that we haven't closed the fs:
{noformat}
[ERROR] Tests run: 9, Failures: 0, Errors: 9, Skipped: 0, Time elapsed: 0.778 s <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
[ERROR] testRecursiveRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.085 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.082 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testRmRootRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.08 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testListEmptyRootDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.084 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
	at org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir.testListEmptyRootDirectory(ITestS3AContractRootDir.java:63)
[ERROR] testCreateFileOverRoot(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.078 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testSimpleRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.079 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testMkDirDepth1(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.108 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testRmEmptyRootDirRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.083 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
(..)
{noformat}
So there should be another solution to this that won't break the tests but also won't cause the {{FileSystem is closed!}} issue. Running this test class separately, all 9 tests pass. Could we disable the caching just for these tests and have them run on a new FS instance? (I get other issues as well when running the tests without closing the fs. I think there were many reasons for closing the fs, but I missed the history.)
[jira] [Updated] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus
[ https://issues.apache.org/jira/browse/HADOOP-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-15831: Assignee: Ted Yu Status: Patch Available (was: Open) > Include modificationTime in the toString method of CopyListingFileStatus > > > Key: HADOOP-15831 > URL: https://issues.apache.org/jira/browse/HADOOP-15831 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Assignee: Ted Yu >Priority: Minor > Attachments: HADOOP-15831.01.patch > > > I was looking at a DistCp error observed in hbase backup test: > {code} > 2018-10-08 18:12:03,067 WARN [Thread-933] mapred.LocalJobRunner$Job(590): > job_local1175594345_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@7ac56817{hdfs://localhost:41712/user/hbase/test-data/ > > c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@7aa4deb2{hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10- > > 57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > 2018-10-08 18:12:03,150 INFO [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(226): Progress: 100.0% subTask: > 1.0 mapProgress: 1.0 > {code} > I noticed that modificationTime was not included in the toString of > CopyListingFileStatus. 
> I propose including modificationTime so that it is easier to tell when the > respective files last changed. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
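The proposed change is small enough to sketch. The following is a hypothetical, self-contained model (invented class and field names, not the actual patch; the real CopyListingFileStatus also carries aclEntries and xAttrs, as the log above shows) of what appending modificationTime to the toString output might look like:

```java
// Simplified, hypothetical model of CopyListingFileStatus#toString: the point
// is only to append modificationTime so that two mismatched chunk files can be
// told apart by when they last changed.
public class CopyListingStatusSketch {
    private final String path;
    private final long length;
    private final long modificationTime;

    public CopyListingStatusSketch(String path, long length, long modificationTime) {
        this.path = path;
        this.length = length;
        this.modificationTime = modificationTime;
    }

    @Override
    public String toString() {
        // Mirrors the existing "{path length = N ...}" shape from the log above,
        // with "modificationTime = N" as the proposed addition.
        return getClass().getSimpleName() + '{' + path
                + " length = " + length
                + " modificationTime = " + modificationTime
                + '}';
    }
}
```

With this, the two entries in the "Inconsistent sequence file" message above would each show when their file was last modified, which is the extra signal the ticket asks for.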
[jira] [Updated] (HADOOP-15831) Include modificationTime in the toString method of CopyListingFileStatus
[ https://issues.apache.org/jira/browse/HADOOP-15831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ted Yu updated HADOOP-15831: Attachment: HADOOP-15831.01.patch > Include modificationTime in the toString method of CopyListingFileStatus > > > Key: HADOOP-15831 > URL: https://issues.apache.org/jira/browse/HADOOP-15831 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Ted Yu >Priority: Minor > Attachments: HADOOP-15831.01.patch > > > I was looking at a DistCp error observed in hbase backup test: > {code} > 2018-10-08 18:12:03,067 WARN [Thread-933] mapred.LocalJobRunner$Job(590): > job_local1175594345_0004 > java.io.IOException: Inconsistent sequence file: current chunk file > org.apache.hadoop.tools.CopyListingFileStatus@7ac56817{hdfs://localhost:41712/user/hbase/test-data/ > > c0f6352c-cf39-bbd1-7d10-57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/f565f49046b04eecbf8d129eac7a7b88_SeqId_205_ > length = 5100 aclEntries = null, xAttrs = null} doesnt match prior entry > org.apache.hadoop.tools.CopyListingFileStatus@7aa4deb2{hdfs://localhost:41712/user/hbase/test-data/c0f6352c-cf39-bbd1-7d10- > > 57a9c01e7ce9/data/default/test-1539022262249/be1bf5445faddb63e45726410a07917a/f/41b6cb64bae64cbcac47d1fd9aae59f4_SeqId_205_ > length = 5142 aclEntries = null, xAttrs = null} > at > org.apache.hadoop.tools.mapred.CopyCommitter.concatFileChunks(CopyCommitter.java:276) > at > org.apache.hadoop.tools.mapred.CopyCommitter.commitJob(CopyCommitter.java:100) > at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:567) > 2018-10-08 18:12:03,150 INFO [Time-limited test] > mapreduce.MapReduceBackupCopyJob$BackupDistCp(226): Progress: 100.0% subTask: > 1.0 mapProgress: 1.0 > {code} > I noticed that modificationTime was not included in the toString of > CopyListingFileStatus. > I propose including modificationTime so that it is easier to tell when the > respective files last change. 
[jira] [Commented] (HADOOP-14556) S3A to support Delegation Tokens
[ https://issues.apache.org/jira/browse/HADOOP-14556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643754#comment-16643754 ] Hadoop QA commented on HADOOP-14556: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 47s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 35 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 49s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 24s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 46s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 17m 50s{color} | {color:green} root generated 0 new + 1326 unchanged - 1 fixed = 1326 total (was 1327) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 58s{color} | {color:orange} root: The patch generated 33 new + 119 unchanged - 7 fixed = 152 total (was 126) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 14s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 105 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 12s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 51s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 1 unchanged - 0 fixed = 2 total (was 1) {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 20s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 49s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}103m 36s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 47s{color} | {color:green} hadoop-aws in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 49s{color} | {color:green} The patch does not
[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run
[ https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643749#comment-16643749 ] Gabor Bota commented on HADOOP-15819: -
Also note: I was getting interesting errors when I removed the line, maybe related to the fact that we haven't closed the fs:
{noformat}
[ERROR] Tests run: 9, Failures: 0, Errors: 9, Skipped: 0, Time elapsed: 0.778 s <<< FAILURE! - in org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir
[ERROR] testRecursiveRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.085 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testRmEmptyRootDirNonRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.082 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testRmRootRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.08 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testListEmptyRootDirectory(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.084 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
	at org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir.testListEmptyRootDirectory(ITestS3AContractRootDir.java:63)
[ERROR] testCreateFileOverRoot(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.078 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testSimpleRootListing(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.079 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string
[ERROR] testMkDirDepth1(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.108 s <<< ERROR!
java.lang.IllegalArgumentException: Can not create a Path from an empty string [ERROR] testRmEmptyRootDirRecursive(org.apache.hadoop.fs.contract.s3a.ITestS3AContractRootDir) Time elapsed: 0.083 s <<< ERROR! java.lang.IllegalArgumentException: Can not create a Path from an empty string (..) {noformat} So there should be another solution to this that won't break the tests but also won't cause the {{FileSystem is closed!}} issue. Could we disable the caching just for these test and have them run on a new FS instance? > S3A integration test failures: FileSystem is closed! - without parallel test > run > > > Key: HADOOP-15819 > URL: https://issues.apache.org/jira/browse/HADOOP-15819 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.1.1 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Critical > Attachments: S3ACloseEnforcedFileSystem.java, > closed_fs_closers_example_5klines.log.zip > > > Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against > Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these > failures: > {noformat} > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob) > Time elapsed: 0.027 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob > [ERROR] > testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.021 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! 
> [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.022 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest) > Time elapsed: 0.023 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 > s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob > [ERROR]
[jira] [Commented] (HADOOP-15828) Review of MachineList class
[ https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643751#comment-16643751 ] Hadoop QA commented on HADOOP-15828: | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 33s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 23m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 19s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} hadoop-common-project/hadoop-common: The patch generated 0 new + 31 unchanged - 6 fixed = 31 total (was 37) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 57s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 9s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 14s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 47s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 40s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}122m 48s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-15828 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12943044/HADOOP-15828.3.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 347bb373a499 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 9e9915d | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15325/testReport/ | | Max. process+thread count | 1371 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15325/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Review of MachineList class > --- > >
[jira] [Commented] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run
[ https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643742#comment-16643742 ] Gabor Bota commented on HADOOP-15819: -
If I remove {{IOUtils.closeStream(getFileSystem());}} from org/apache/hadoop/fs/s3a/AbstractS3ATestBase.java:55, it fixes the issue. So the teardown would be
{code:java}
@Override
public void teardown() throws Exception {
  super.teardown();
}
{code}
instead of
{code:java}
@Override
public void teardown() throws Exception {
  super.teardown();
  describe("closing file system");
  IOUtils.closeStream(getFileSystem());
}
{code}
so we could even skip the override. We could also do something like
{code:java}
@Override
public void teardown() throws Exception {
  super.teardown();
  if (getConfiguration().getBoolean(FS_S3A_IMPL_DISABLE_CACHE, false)) {
    describe("closing file system");
    IOUtils.closeStream(getFileSystem());
  }
}
{code}
but I was still getting some {{FileSystem is closed!}} errors when I used that method for closing the fs. I wanted to get to the bottom of this, so I checked how fs caching works in general. We have a static fs cache in org.apache.hadoop.fs.FileSystem that we use to cache fs instances. If we don't have an fs instance for the schema provided (e.g. s3a://), we create a new instance and store it in the static cache - "If this is the first entry in the map and the JVM is not shutting down this registers a shutdown hook to close filesystems, and adds this FS to the toAutoClose if '{{fs.automatic.close}}' is set in the configuration (default: true)" - so I don't see why we have to close it manually between each and every test. What's your opinion on this [~ste...@apache.org]? > S3A integration test failures: FileSystem is closed! 
- without parallel test > run > > > Key: HADOOP-15819 > URL: https://issues.apache.org/jira/browse/HADOOP-15819 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.1.1 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Critical > Attachments: S3ACloseEnforcedFileSystem.java, > closed_fs_closers_example_5klines.log.zip > > > Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against > Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these > failures: > {noformat} > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob) > Time elapsed: 0.027 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob > [ERROR] > testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.021 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.022 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest) > Time elapsed: 0.023 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! 
> [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 > s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob > [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob) > Time elapsed: 0.039 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 > s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory > [ERROR] > testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory) > Time elapsed: 0.014 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > {noformat} > The big issue is that the tests are running in a serial manner - no test is > running on top of the other - so we should not see that the tests are failing > like this. The issue could be in how we handle >
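The create-on-miss caching Gabor describes above can be modeled in plain Java. This is a deliberately simplified, hypothetical stand-in for the static FileSystem cache (it ignores the real cache's per-user keys, URI handling, and concurrency details), sketched only to show why a shared cached instance breaks once any test closes it:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Simplified model of a static filesystem cache: one instance per scheme,
// created on the first request. In the real code, a JVM shutdown hook closes
// cached instances whose config left fs.automatic.close at its default (true);
// here closeAll() stands in for that hook.
public class FsCacheSketch {
    public static class Fs {
        public final String scheme;
        public boolean closed;
        public Fs(String scheme) { this.scheme = scheme; }
        public void close() { closed = true; }
    }

    private final Map<String, Fs> cache = new ConcurrentHashMap<>();

    // Returns the cached instance for the scheme, creating it on a miss.
    public Fs get(String scheme) {
        return cache.computeIfAbsent(scheme, Fs::new);
    }

    // Bypasses the cache entirely -- the analogue of FileSystem.newInstance(),
    // one way to give each test its own fresh FS.
    public Fs newInstance(String scheme) {
        return new Fs(scheme);
    }

    // What the shutdown hook would do at JVM exit.
    public void closeAll() {
        cache.values().forEach(Fs::close);
        cache.clear();
    }
}
```

In this model the failure mode from the ticket is easy to reproduce: one test closes the shared cached instance, and every later test that calls get() for the same scheme receives the already-closed object.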
[jira] [Commented] (HADOOP-15825) ABFS: Enable some tests for namespace not enabled account using Oauth
[ https://issues.apache.org/jira/browse/HADOOP-15825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643734#comment-16643734 ] Da Zhou commented on HADOOP-15825: -- Submitting 002 patch: - removed unused imports. > ABFS: Enable some tests for namespace not enabled account using Oauth > - > > Key: HADOOP-15825 > URL: https://issues.apache.org/jira/browse/HADOOP-15825 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Da Zhou >Assignee: Da Zhou >Priority: Major > Attachments: HADOOP-15825-001.patch, HADOOP-15825-002.patch > > > When testing namespace not enabled account using Oauth, some tests were > skipped. So need to update the tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15825) ABFS: Enable some tests for namespace not enabled account using Oauth
[ https://issues.apache.org/jira/browse/HADOOP-15825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou updated HADOOP-15825: - Attachment: HADOOP-15825-002.patch > ABFS: Enable some tests for namespace not enabled account using Oauth > - > > Key: HADOOP-15825 > URL: https://issues.apache.org/jira/browse/HADOOP-15825 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.0 >Reporter: Da Zhou >Assignee: Da Zhou >Priority: Major > Attachments: HADOOP-15825-001.patch, HADOOP-15825-002.patch > > > When testing namespace not enabled account using Oauth, some tests were > skipped. So need to update the tests. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15830) Server.java Prefer ArrayList
[ https://issues.apache.org/jira/browse/HADOOP-15830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643723#comment-16643723 ] Íñigo Goiri commented on HADOOP-15830: -- This part of the code is pretty close to the core, so I'd like to get other reviewers to chime in. In general, I think that fixing all these small issues is valuable and should be done; however, people are reluctant because it makes cherry-picking much harder. Anyway, if we go with this, I would fix the remaining 3 checkstyle issues. > Server.java Prefer ArrayList > > > Key: HADOOP-15830 > URL: https://issues.apache.org/jira/browse/HADOOP-15830 > Project: Hadoop Common > Issue Type: Improvement > Components: ipc >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15830.2.patch, HDFS-13969.1.patch > > > * Prefer ArrayDeque over LinkedList (faster, less memory overhead) > * Address this code:
> {code}
> //
> // Remove calls that have been pending in the responseQueue
> // for a long time.
> //
> private void doPurge(RpcCall call, long now) {
>   LinkedList<RpcCall> responseQueue = call.connection.responseQueue;
>   synchronized (responseQueue) {
>     Iterator<RpcCall> iter = responseQueue.listIterator(0);
>     while (iter.hasNext()) {
>       call = iter.next();
>       if (now > call.timestamp + PURGE_INTERVAL) {
>         closeConnection(call.connection);
>         break;
>       }
>     }
>   }
> }
> {code}
> It says "Remove calls" (plural) but only one call will be removed because of the 'break' statement.
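A minimal sketch of what an ArrayDeque-based purge that actually removes *all* stale calls could look like. This is not the Server.java patch itself: `PurgeSketch`, its `Call` class, and the `PURGE_INTERVAL` value are stand-ins for the real Hadoop internals, and the sketch removes stale entries from the queue rather than closing connections.

```java
import java.util.ArrayDeque;
import java.util.Iterator;

class PurgeSketch {
    // Hypothetical purge interval; the real constant lives in Server.java.
    static final long PURGE_INTERVAL = 15 * 60 * 1000L;

    static class Call {
        final long timestamp;
        Call(long timestamp) { this.timestamp = timestamp; }
    }

    // Removes ALL calls older than PURGE_INTERVAL, matching the plural
    // "Remove calls" in the comment, unlike the original loop which
    // breaks out after acting on the first stale call.
    static int doPurge(ArrayDeque<Call> responseQueue, long now) {
        int purged = 0;
        synchronized (responseQueue) {
            Iterator<Call> iter = responseQueue.iterator();
            while (iter.hasNext()) {
                Call call = iter.next();
                if (now > call.timestamp + PURGE_INTERVAL) {
                    iter.remove(); // O(1) per removal in ArrayDeque's iterator
                    purged++;
                }
            }
        }
        return purged;
    }
}
```

ArrayDeque gives the same iterator-based traversal as LinkedList without the per-node allocation overhead, which is the substance of the "Prefer ArrayDeque" suggestion.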
[jira] [Commented] (HADOOP-11100) Support to configure ftpClient.setControlKeepAliveTimeout
[ https://issues.apache.org/jira/browse/HADOOP-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643708#comment-16643708 ] Kitti Nanasi commented on HADOOP-11100: --- Thanks for working on this [~adam.antal]! You can test the added configuration in TestFTPFileSystem, as in TestFTPFileSystem#testFTPTransferMode. Do I understand correctly that the default timeout is 300 milliseconds? > Support to configure ftpClient.setControlKeepAliveTimeout > --- > > Key: HADOOP-11100 > URL: https://issues.apache.org/jira/browse/HADOOP-11100 > Project: Hadoop Common > Issue Type: Improvement > Components: fs >Affects Versions: 2.3.0 >Reporter: Krishnamoorthy Dharmalingam >Assignee: Adam Antal >Priority: Minor > Attachments: HADOOP-11100.002.patch, HDFS-11000.001.patch > > > The timeout is currently not configurable in FTPFileSystem or via Configuration. > It would be very straightforward to configure it in the FTPFileSystem.connect() method via ftpClient.setControlKeepAliveTimeout, like:
> {code}
> private FTPClient connect() throws IOException {
>   ...
>   String timeout = conf.get("fs.ftp.timeout." + host);
>   ...
>   ftpClient.setControlKeepAliveTimeout(new Integer(300));
>   ...
> }
> {code}
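A hedged sketch of the configuration lookup the snippet above suggests. The per-host property name `fs.ftp.timeout.<host>` is taken from the snippet; the default of 300 is hypothetical; and note that commons-net documents `FTPClient.setControlKeepAliveTimeout` as taking a value in *seconds*, not milliseconds, which is relevant to the question above. Plain `Map` stands in for Hadoop's `Configuration` so the sketch stays self-contained.

```java
import java.util.Map;

class FtpTimeoutSketch {
    // Hypothetical default; the real patch would pick its own.
    static final long DEFAULT_KEEPALIVE = 300L;

    // Resolve a per-host control keep-alive timeout from configuration,
    // falling back to the default when the property is absent or malformed.
    static long keepAliveFor(Map<String, String> conf, String host) {
        String raw = conf.get("fs.ftp.timeout." + host);
        if (raw == null) {
            return DEFAULT_KEEPALIVE;
        }
        try {
            return Long.parseLong(raw.trim());
        } catch (NumberFormatException e) {
            return DEFAULT_KEEPALIVE;
        }
    }
}
```

The resolved value would then be passed to `ftpClient.setControlKeepAliveTimeout(...)` inside `connect()`.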
[jira] [Commented] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation
[ https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643698#comment-16643698 ] Hadoop QA commented on HADOOP-15616: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 24s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 16 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 6s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 17m 5s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-cloud-storage-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 27s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 27s{color} | {color:red} root generated 3 new + 1327 unchanged - 0 fixed = 1330 total (was 1327) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 21s{color} | {color:orange} root: The patch generated 238 new + 0 unchanged - 0 fixed = 238 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 26 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 7s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 48s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-cloud-storage-project {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s{color} | {color:green} hadoop-project in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 31s{color} | {color:green} hadoop-cos in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 32s{color} | {color:green} hadoop-cloud-storage-project in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 39s{color} | {color:red} The patch generated 3 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 98m 50s{color} | {color:black} {color} |
[jira] [Work started] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run
[ https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-15819 started by Gabor Bota. --- > S3A integration test failures: FileSystem is closed! - without parallel test > run > > > Key: HADOOP-15819 > URL: https://issues.apache.org/jira/browse/HADOOP-15819 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.1.1 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Critical > Attachments: S3ACloseEnforcedFileSystem.java, > closed_fs_closers_example_5klines.log.zip > > > Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against > Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these > failures: > {noformat} > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob) > Time elapsed: 0.027 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob > [ERROR] > testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.021 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.022 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 > s <<< FAILURE! 
- in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest) > Time elapsed: 0.023 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 > s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob > [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob) > Time elapsed: 0.039 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 > s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory > [ERROR] > testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory) > Time elapsed: 0.014 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > {noformat} > The big issue is that the tests are running in a serial manner - no test is > running on top of the other - so we should not see that the tests are failing > like this. The issue could be in how we handle > org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same > S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then test B > will get the same FileSystem object from the cache and try to use it, > but it is closed. > We see this a lot in our downstream testing too. It's not possible to tell > whether the failed regression test result is an implementation issue in the > runtime code or a test implementation problem. > I've checked when and what closes the S3AFileSystem with a slightly modified > version of S3AFileSystem which logs the closers of the fs if an error should > occur. I'll attach this modified java file for reference. 
See the next > example of the result when it's running: > {noformat} > 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem > (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): > java.lang.RuntimeException: Using closed FS!. > at > org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73) > at > org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474) > at > org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338) > at > org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193) > at > org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:40) > at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) > at >
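The caching failure mode described above can be modeled in a few lines of plain Java. This is a toy stand-in for `FileSystem#CACHE`, not the Hadoop implementation: test A and test B get the same cached instance for the same URI, so A's `close()` in teardown leaves B holding a closed object.

```java
import java.util.HashMap;
import java.util.Map;

class FsCacheSketch {
    static class CachedFs {
        private boolean closed;
        void close() { closed = true; }
        boolean isClosed() { return closed; }
        void mkdirs() {
            if (closed) {
                throw new IllegalStateException("FileSystem is closed!");
            }
        }
    }

    private static final Map<String, CachedFs> CACHE = new HashMap<>();

    // Like FileSystem.get(): the same URI yields the same shared instance,
    // so one caller's close() is visible to every other caller.
    static CachedFs get(String uri) {
        return CACHE.computeIfAbsent(uri, u -> new CachedFs());
    }
}
```

This is also why test suites that close filesystems in teardown typically either bypass the cache (Hadoop exposes `fs.<scheme>.impl.disable.cache` for this) or avoid closing the shared instance.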
[jira] [Assigned] (HADOOP-15819) S3A integration test failures: FileSystem is closed! - without parallel test run
[ https://issues.apache.org/jira/browse/HADOOP-15819?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota reassigned HADOOP-15819: --- Assignee: Gabor Bota > S3A integration test failures: FileSystem is closed! - without parallel test > run > > > Key: HADOOP-15819 > URL: https://issues.apache.org/jira/browse/HADOOP-15819 > Project: Hadoop Common > Issue Type: Bug > Components: fs/s3 >Affects Versions: 3.1.1 >Reporter: Gabor Bota >Assignee: Gabor Bota >Priority: Critical > Attachments: S3ACloseEnforcedFileSystem.java, > closed_fs_closers_example_5klines.log.zip > > > Running the integration tests for hadoop-aws {{mvn -Dscale verify}} against > Amazon AWS S3 (eu-west-1, us-west-1, with no s3guard) we see a lot of these > failures: > {noformat} > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.408 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITDirectoryCommitMRJob) > Time elapsed: 0.027 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 2, Failures: 0, Errors: 2, Skipped: 0, Time elapsed: 4.345 > s <<< FAILURE! - in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob > [ERROR] > testStagingDirectory(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.021 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJob) > Time elapsed: 0.022 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.489 > s <<< FAILURE! 
- in > org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest > [ERROR] > testMRJob(org.apache.hadoop.fs.s3a.commit.staging.integration.ITStagingCommitMRJobBadDest) > Time elapsed: 0.023 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 4.695 > s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob > [ERROR] testMRJob(org.apache.hadoop.fs.s3a.commit.magic.ITMagicCommitMRJob) > Time elapsed: 0.039 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > [ERROR] Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.015 > s <<< FAILURE! - in org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory > [ERROR] > testEverything(org.apache.hadoop.fs.s3a.commit.ITestS3ACommitterFactory) > Time elapsed: 0.014 s <<< ERROR! > java.io.IOException: s3a://cloudera-dev-gabor-ireland: FileSystem is closed! > {noformat} > The big issue is that the tests are running in a serial manner - no test is > running on top of the other - so we should not see that the tests are failing > like this. The issue could be in how we handle > org.apache.hadoop.fs.FileSystem#CACHE - the tests should use the same > S3AFileSystem, so if test A uses a FileSystem and closes it in teardown, then test B > will get the same FileSystem object from the cache and try to use it, > but it is closed. > We see this a lot in our downstream testing too. It's not possible to tell > whether the failed regression test result is an implementation issue in the > runtime code or a test implementation problem. > I've checked when and what closes the S3AFileSystem with a slightly modified > version of S3AFileSystem which logs the closers of the fs if an error should > occur. I'll attach this modified java file for reference. 
See the next > example of the result when it's running: > {noformat} > 2018-10-04 00:52:25,596 [Thread-4201] ERROR s3a.S3ACloseEnforcedFileSystem > (S3ACloseEnforcedFileSystem.java:checkIfClosed(74)) - Use after close(): > java.lang.RuntimeException: Using closed FS!. > at > org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.checkIfClosed(S3ACloseEnforcedFileSystem.java:73) > at > org.apache.hadoop.fs.s3a.S3ACloseEnforcedFileSystem.mkdirs(S3ACloseEnforcedFileSystem.java:474) > at > org.apache.hadoop.fs.contract.AbstractFSContractTestBase.mkdirs(AbstractFSContractTestBase.java:338) > at > org.apache.hadoop.fs.contract.AbstractFSContractTestBase.setup(AbstractFSContractTestBase.java:193) > at > org.apache.hadoop.fs.s3a.ITestS3AClosedFS.setup(ITestS3AClosedFS.java:40) > at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) > at >
[jira] [Commented] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643635#comment-16643635 ] Steve Loughran commented on HADOOP-15833: - Also a stack trace during table delete cleanup (i.e. table is trying to be deleted, but the delete is rejected because the table is being deleted) {code} [ERROR] testDynamoDBInitDestroyCycle(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB) Time elapsed: 146.059 s <<< ERROR! com.amazonaws.services.dynamodbv2.model.ResourceInUseException: Attempt to change a resource which is still in use: Table is being deleted: testDynamoDBInitDestroy-2025089569 (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceInUseException; Request ID: E5COQT83DJMPNH9G9NTK2PVKJJVV4KQNSO5AEMVJF66Q9ASUAAJG) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1640) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.doInvoke(AmazonDynamoDBClient.java:3443) at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.invoke(AmazonDynamoDBClient.java:3419) at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.executeDeleteTable(AmazonDynamoDBClient.java:1218) at 
com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.deleteTable(AmazonDynamoDBClient.java:1193) at com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient.deleteTable(AmazonDynamoDBClient.java:1230) at com.amazonaws.services.dynamodbv2.document.Table.delete(Table.java:587) at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB.testDynamoDBInitDestroyCycle(ITestS3GuardToolDynamoDB.java:332) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} > ITestS3GuardToolDynamoDB fails intermittently in parallel runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: Screen Shot 2018-10-09 at 15.33.35.png > > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode. 
[jira] [Commented] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries
[ https://issues.apache.org/jira/browse/HADOOP-15834?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643618#comment-16643618 ] Steve Loughran commented on HADOOP-15834: - Also, switching to dynamic capacity seems to trigger periods when the DDB table isn't active any more {code} ERROR] Tests run: 68, Failures: 0, Errors: 1, Skipped: 4, Time elapsed: 429.837 s <<< FAILURE! - in org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations [ERROR] testCreateFlagAppendNonExistingFile(org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations) Time elapsed: 127.843 s <<< ERROR! java.lang.RuntimeException: java.io.IOException: Failed to instantiate metadata store org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore defined in fs.s3a.metadatastore.impl: java.lang.IllegalArgumentException: Table hwdev-steve-ireland-new did not transition into ACTIVE state. at org.apache.hadoop.fs.FileContext.getFileContext(FileContext.java:464) at org.apache.hadoop.fs.s3a.S3ATestUtils.createTestFileContext(S3ATestUtils.java:218) at org.apache.hadoop.fs.s3a.fileContext.ITestS3AFileContextMainOperations.setUp(ITestS3AFileContextMainOperations.java:33) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365) at org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273) at org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238) at org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159) at org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:379) at org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:340) at org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:125) at org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:413) Caused by: java.io.IOException: Failed to instantiate metadata store org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore defined in fs.s3a.metadatastore.impl: java.lang.IllegalArgumentException: Table hwdev-steve-ireland-new did not transition into ACTIVE state. 
at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:114) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:378) at org.apache.hadoop.fs.DelegateToFileSystem.(DelegateToFileSystem.java:52) at org.apache.hadoop.fs.s3a.S3A.(S3A.java:40) at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.fs.AbstractFileSystem.newInstance(AbstractFileSystem.java:135) at org.apache.hadoop.fs.AbstractFileSystem.createFileSystem(AbstractFileSystem.java:173) at org.apache.hadoop.fs.AbstractFileSystem.get(AbstractFileSystem.java:258) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:336) at org.apache.hadoop.fs.FileContext$2.run(FileContext.java:333) at java.security.AccessController.doPrivileged(Native Method) at
[jira] [Commented] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643579#comment-16643579 ] Steve Loughran commented on HADOOP-15833: - note: maybe we are over-aggressive on retry timeouts on batch write (i.e. 9 failures allowed on a write of 25 items). Even if the batch write is making progress every call, timeouts will still occur. Provided batch ops makes progress, don't count it as a failure: HADOOP-15834 > ITestS3GuardToolDynamoDB fails intermittently in parallel runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: Screen Shot 2018-10-09 at 15.33.35.png > > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Created] (HADOOP-15834) Improve throttling on S3Guard DDB batch retries
Steve Loughran created HADOOP-15834: --- Summary: Improve throttling on S3Guard DDB batch retries Key: HADOOP-15834 URL: https://issues.apache.org/jira/browse/HADOOP-15834 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.2.0 Reporter: Steve Loughran The batch throttling may fail too fast: with a batch update of 25 writes and a default retry count of nine attempts, only nine batch calls may be attempted, even if each attempt is actually successfully writing data. In contrast, a single write of one piece of data gets the same number of attempts, so 25 individual writes can tolerate far more throttling than one bulk write. Proposed: make the retry logic more forgiving of batch writes, e.g. by not counting a batch call in which at least one data item was written as a failure.
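The proposal can be sketched as a retry loop that only charges the failure budget when a batch attempt makes no progress at all. The interfaces, names, and limit here are hypothetical illustrations, not the S3Guard DynamoDB code: the `writer` callback stands in for one throttled batch-write call and reports how many leading items it managed to write.

```java
import java.util.ArrayDeque;
import java.util.Collection;
import java.util.Deque;
import java.util.function.Function;

class BatchRetrySketch {
    // Retries are only "charged" when an attempt writes zero items, so a
    // throttled-but-advancing 25-item batch is never failed merely for
    // needing more than maxFailures round trips.
    static <T> boolean writeWithRetries(Collection<T> items,
                                        Function<Deque<T>, Integer> writer,
                                        int maxFailures) {
        Deque<T> pending = new ArrayDeque<>(items);
        int failures = 0;
        while (!pending.isEmpty()) {
            int written = writer.apply(pending);
            for (int i = 0; i < written; i++) {
                pending.removeFirst(); // drop the items that got through
            }
            if (written == 0 && ++failures >= maxFailures) {
                return false; // genuinely stuck: no progress for maxFailures tries
            }
        }
        return true;
    }
}
```

With this shape, a writer that is throttled down to one item per call still completes a 25-item batch, while a writer that never writes anything fails after `maxFailures` attempts, which is the distinction the issue asks for.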
[jira] [Updated] (HADOOP-15828) Review of MachineList class
[ https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HADOOP-15828: - Status: Patch Available (was: Open) Fixed the unit test to more closely match the return value of the JDK. (JDK does not return a null value for the affected test, but the test did). > Review of MachineList class > --- > > Key: HADOOP-15828 > URL: https://issues.apache.org/jira/browse/HADOOP-15828 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch, > HADOOP-15828.3.patch > > > Clean up and simplify class {{MachineList}}. Primarily, remove LinkedList > implementation and use empty collections instead of 'null' values, add > logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
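A hedged illustration of the "empty collections instead of 'null' values" cleanup the issue describes. The class and method names are invented, not MachineList's actual fields: the point is only that returning an immutable empty collection lets callers iterate the result unconditionally instead of null-checking.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

class MachineListSketch {
    // Parse a comma-separated host list; an empty or blank input yields an
    // immutable empty set rather than null, so callers never null-check.
    static Collection<String> parse(String hostList) {
        if (hostList == null || hostList.trim().isEmpty()) {
            return Collections.emptySet();
        }
        Set<String> hosts = new HashSet<>();
        for (String h : hostList.split(",")) {
            if (!h.trim().isEmpty()) {
                hosts.add(h.trim());
            }
        }
        return hosts;
    }
}
```

Callers can then write `for (String host : parse(conf))` without any null guard, which is the readability win the cleanup is after.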
[jira] [Updated] (HADOOP-15828) Review of MachineList class
[ https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HADOOP-15828: - Attachment: HADOOP-1528.3.patch > Review of MachineList class > --- > > Key: HADOOP-15828 > URL: https://issues.apache.org/jira/browse/HADOOP-15828 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch, > HADOOP-15828.3.patch > > > Clean up and simplify class {{MachineList}}. Primarily, remove LinkedList > implementation and use empty collections instead of 'null' values, add > logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15828) Review of MachineList class
[ https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HADOOP-15828: - Attachment: (was: HADOOP-1528.3.patch) > Review of MachineList class > --- > > Key: HADOOP-15828 > URL: https://issues.apache.org/jira/browse/HADOOP-15828 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch, > HADOOP-15828.3.patch > > > Clean up and simplify class {{MachineList}}. Primarily, remove LinkedList > implementation and use empty collections instead of 'null' values, add > logging. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-15828) Review of MachineList class
[ https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HADOOP-15828: - Attachment: HADOOP-15828.3.patch > Review of MachineList class > --- > > Key: HADOOP-15828 > URL: https://issues.apache.org/jira/browse/HADOOP-15828 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch, > HADOOP-15828.3.patch > > > Clean up and simplify class {{MachineList}}. Primarily, remove LinkedList > implementation and use empty collections instead of 'null' values, add > logging.
[jira] [Updated] (HADOOP-15828) Review of MachineList class
[ https://issues.apache.org/jira/browse/HADOOP-15828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] BELUGA BEHR updated HADOOP-15828: - Status: Open (was: Patch Available) > Review of MachineList class > --- > > Key: HADOOP-15828 > URL: https://issues.apache.org/jira/browse/HADOOP-15828 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 3.2.0 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Minor > Attachments: HADOOP-15828.1.patch, HADOOP-15828.2.patch > > > Clean up and simplify class {{MachineList}}. Primarily, remove LinkedList > implementation and use empty collections instead of 'null' values, add > logging.
[jira] [Commented] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643523#comment-16643523 ] Steve Loughran commented on HADOOP-15833: - also saw in an updated test run: {code} [ERROR] testConcurrentTableCreations(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps) Time elapsed: 148.273 s <<< ERROR! java.lang.IllegalArgumentException: Table testConcurrentTableCreations-1102232307 is not deleted. at com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:505) at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.deleteTable(ITestS3GuardConcurrentOps.java:77) at org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardConcurrentOps.testConcurrentTableCreations(ITestS3GuardConcurrentOps.java:166) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: com.amazonaws.waiters.WaiterTimedOutException: Reached maximum attempts without transitioning to the desired state at com.amazonaws.waiters.WaiterExecution.pollResource(WaiterExecution.java:86) at com.amazonaws.waiters.WaiterImpl.run(WaiterImpl.java:88) at 
com.amazonaws.services.dynamodbv2.document.Table.waitForDelete(Table.java:502) ... 13 more {code} Maybe table ops are getting throttled too. > ITestS3GuardToolDynamoDB fails intermittently in parallel runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: Screen Shot 2018-10-09 at 15.33.35.png > > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode.
[jira] [Commented] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643515#comment-16643515 ] Steve Loughran commented on HADOOP-15833: - I'm attaching a screenshot of the metrics; this was a fixed 10-read/10-write DDB table (now made dynamic). For the curious: it shows what the DDB load looks like on a long-haul parallel test run. > ITestS3GuardToolDynamoDB fails intermittently in parallel runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: Screen Shot 2018-10-09 at 15.33.35.png > > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode.
[jira] [Updated] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-15833: Attachment: Screen Shot 2018-10-09 at 15.33.35.png > ITestS3GuardToolDynamoDB fails intermittently in parallel runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Attachments: Screen Shot 2018-10-09 at 15.33.35.png > > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode.
[jira] [Commented] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643509#comment-16643509 ] Steve Loughran commented on HADOOP-15833: - increasing test timeout causes the prune command to fail with too many attempts {code} [ERROR] testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB) Time elapsed: 193.269 s <<< ERROR! org.apache.hadoop.fs.s3a.AWSServiceThrottledException: Max retries during batch write exceeded (9) for DynamoDB. This may be because the write threshold of DynamoDB is set too low.: Throttling (Service: S3Guard; Status Code: 503; Error Code: Throttling; Request ID: n/a) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoffOnBatchWrite(DynamoDBMetadataStore.java:806) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.processBatchWriteRequest(DynamoDBMetadataStore.java:765) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.prune(DynamoDBMetadataStore.java:1030) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.prune(DynamoDBMetadataStore.java:993) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testPruneCommand(AbstractS3GuardToolTestBase.java:271) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testPruneCommandCLI(AbstractS3GuardToolTestBase.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: com.amazonaws.AmazonServiceException: Throttling (Service: S3Guard; Status Code: 503; Error Code: Throttling; Request ID: n/a) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoffOnBatchWrite(DynamoDBMetadataStore.java:797) {code} > ITestS3GuardToolDynamoDB fails intermittently in parallel runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode.
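The {{retryBackoffOnBatchWrite}} frames in the trace above correspond to a retry-with-exponential-backoff loop around throttled batch writes. Below is a minimal standalone sketch of that pattern; the method name, parameters, and the use of {{IllegalStateException}} as the "throttled" signal are illustrative stand-ins, not the real Hadoop or AWS SDK types:

```java
import java.util.concurrent.Callable;

// Sketch of retry-with-exponential-backoff around a throttled operation.
// IllegalStateException stands in for a throttling exception here.
class BackoffRetry {
    static <T> T retryOnThrottle(Callable<T> op, int maxRetries, long baseDelayMs)
            throws Exception {
        for (int attempt = 0; ; attempt++) {
            try {
                return op.call();
            } catch (IllegalStateException throttled) {
                if (attempt >= maxRetries) {
                    throw throttled; // retries exhausted: surface the failure
                }
                // Exponential backoff: baseDelayMs * 2^attempt
                Thread.sleep(baseDelayMs << attempt);
            }
        }
    }
}
```

The failure quoted above is exactly this loop giving up: after the configured maximum number of attempts the throttling exception is rethrown to the caller.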
[jira] [Commented] (HADOOP-11100) Support to configure ftpClient.setControlKeepAliveTimeout
[ https://issues.apache.org/jira/browse/HADOOP-11100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643441#comment-16643441 ] Hadoop QA commented on HADOOP-11100: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 37s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 15s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 22m 15s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 12s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 13s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 18s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 28s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 55s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 31s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:4b8c2b1 | | JIRA Issue | HADOOP-11100 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12942999/HADOOP-11100.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b2329bb75cc8 3.13.0-153-generic #203-Ubuntu SMP Thu Jun 14 08:52:28 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 7ba1cfd | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_181 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/15321/testReport/ | | Max. process+thread count | 1347 (vs. ulimit of 1) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/15321/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > Support to configure
[jira] [Commented] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643419#comment-16643419 ] Steve Loughran commented on HADOOP-15833: - and {{ITestS3GuardToolDynamoDB.testSetCapacityFailFastIfNotGuarded()}} needs to clear the bucket-specific setting before running the s3guard command, so that S3Guard isn't started. I don't understand why this only happens sometimes, though, given that the per-bucket settings are set in auth-keys.xml and are the same for serial and parallel runs {code} [ERROR] testSetCapacityFailFastIfNotGuarded(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB) Time elapsed: 7.164 s <<< ERROR! java.io.FileNotFoundException: DynamoDB table '3dd402c7-ff1b-4906-91ab-7831acd7d81f' does not exist in region eu-west-1; auto-creation is turned off at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initTable(DynamoDBMetadataStore.java:1213) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.initialize(DynamoDBMetadataStore.java:356) at org.apache.hadoop.fs.s3a.s3guard.S3Guard.getMetadataStore(S3Guard.java:99) at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:378) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3354) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:124) at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3403) at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:3377) at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:530) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$SetCapacity.run(S3GuardTool.java:513) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:353) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.fs.s3a.s3guard.S3GuardTool.run(S3GuardTool.java:1552) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.run(AbstractS3GuardToolTestBase.java:115) at
org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.lambda$testSetCapacityFailFastIfNotGuarded$2(AbstractS3GuardToolTestBase.java:331) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:494) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:380) at org.apache.hadoop.test.LambdaTestUtils.intercept(LambdaTestUtils.java:449) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testSetCapacityFailFastIfNotGuarded(AbstractS3GuardToolTestBase.java:330) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) Caused by: com.amazonaws.services.dynamodbv2.model.ResourceNotFoundException: Requested resource not found: Table: 3dd402c7-ff1b-4906-91ab-7831acd7d81f not found (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: ResourceNotFoundException; Request ID: 7FOARITHKRAUA6MRCM50NLHKN3VV4KQNSO5AEMVJF66Q9ASUAAJG) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1640) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1304) at 
com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1058) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:743) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:717) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:699) at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:667) at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:649) at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:513) at
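Clearing per-bucket overrides, as the comment above suggests, amounts to dropping every configuration key under a bucket-specific prefix. A plain-Java sketch using {{java.util.Properties}} rather than Hadoop's {{Configuration}}, with a hypothetical helper name; only the {{fs.s3a.bucket.*}} key convention is taken from the real settings:

```java
import java.util.Properties;

// Sketch: strip per-bucket overrides so only the base settings remain.
class BucketOverrides {
    static void removeBucketOverrides(Properties conf, String bucket) {
        String prefix = "fs.s3a.bucket." + bucket + ".";
        // Drop every property whose key starts with the bucket prefix;
        // Properties' key set is backed by the table, so removal sticks.
        conf.keySet().removeIf(k -> k.toString().startsWith(prefix));
    }
}
```

With overrides removed, the tool would see only the base (unguarded) configuration instead of a bucket-specific metadata-store binding.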
[jira] [Commented] (HADOOP-15822) zstd compressor can fail with a small output buffer
[ https://issues.apache.org/jira/browse/HADOOP-15822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643416#comment-16643416 ] Peter Bacsko commented on HADOOP-15822: --- [~jlowe] you were right, it's not related to zstandard. I reproduced this with other codecs + no compression. It's possibly an edge case. > zstd compressor can fail with a small output buffer > --- > > Key: HADOOP-15822 > URL: https://issues.apache.org/jira/browse/HADOOP-15822 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.9.0, 3.0.0 >Reporter: Jason Lowe >Assignee: Jason Lowe >Priority: Major > Attachments: HADOOP-15822.001.patch, HADOOP-15822.002.patch > > > TestZStandardCompressorDecompressor fails a couple of tests on my machine > with the latest zstd library (1.3.5). Compression can fail to successfully > finalize the stream when a small output buffer is used resulting in a failed > to init error, and decompression with a direct buffer can fail with an > invalid src size error.
[jira] [Commented] (HADOOP-15833) ITestS3GuardToolDynamoDB fails intermittently in parallel runs
[ https://issues.apache.org/jira/browse/HADOOP-15833?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16643417#comment-16643417 ] Steve Loughran commented on HADOOP-15833: - {{testPruneCommandCLI}} is timing out after 60s on parallel runs (I keep them small to force failures here; it looks like the retry logic is working so well that tests time out instead). If there's a large amount of data to prune, maybe it's taking too long. Proposal: increase the test timeout to the scale-test timeout. (I know, I could just increase my DDB size, but I want to make sure it will eventually complete here even on retries) {code} [ERROR] testPruneCommandCLI(org.apache.hadoop.fs.s3a.s3guard.ITestS3GuardToolDynamoDB) Time elapsed: 600.03 s <<< ERROR! java.lang.Exception: test timed out after 60 milliseconds at java.lang.Thread.sleep(Native Method) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.retryBackoffOnBatchWrite(DynamoDBMetadataStore.java:813) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.processBatchWriteRequest(DynamoDBMetadataStore.java:765) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.innerPut(DynamoDBMetadataStore.java:851) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.removeAuthoritativeDirFlag(DynamoDBMetadataStore.java:1080) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.prune(DynamoDBMetadataStore.java:1033) at org.apache.hadoop.fs.s3a.s3guard.DynamoDBMetadataStore.prune(DynamoDBMetadataStore.java:993) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testPruneCommand(AbstractS3GuardToolTestBase.java:271) at org.apache.hadoop.fs.s3a.s3guard.AbstractS3GuardToolTestBase.testPruneCommandCLI(AbstractS3GuardToolTestBase.java:286) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} > ITestS3GuardToolDynamoDB fails intermittently in parallel runs > -- > > Key: HADOOP-15833 > URL: https://issues.apache.org/jira/browse/HADOOP-15833 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > > intermittent failure of a pair of {{ITestS3GuardToolDynamoDB}} tests in > parallel runs. They don't seem to fail in sequential mode.