[GitHub] [hadoop] hadoop-yetus commented on issue #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
hadoop-yetus commented on issue #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
URL: https://github.com/apache/hadoop/pull/1699#issuecomment-551441134

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 80 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1210 | trunk passed |
| +1 | compile | 32 | trunk passed |
| +1 | checkstyle | 20 | trunk passed |
| +1 | mvnsite | 39 | trunk passed |
| +1 | shadedclient | 950 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 51 | trunk passed |
| 0 | spotbugs | 69 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 68 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 29 | the patch passed |
| +1 | compile | 27 | the patch passed |
| +1 | javac | 27 | the patch passed |
| +1 | checkstyle | 16 | the patch passed |
| +1 | mvnsite | 29 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 898 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 47 | the patch passed |
| +1 | findbugs | 71 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 478 | hadoop-hdfs-rbf in the patch failed. |
| +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
| | | 4200 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.federation.metrics.TestMetricsBase |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1699/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1699 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux fac68eba793b 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 42fc888 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1699/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1699/4/testReport/ |
| Max. process+thread count | 2415 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1699/4/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] ayushtkn commented on a change in pull request #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
ayushtkn commented on a change in pull request #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
URL: https://github.com/apache/hadoop/pull/1699#discussion_r344054864

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java

```diff
@@ -25,20 +35,10 @@
 import org.apache.hadoop.test.GenericTestUtils;
 import org.junit.After;
 import org.junit.Before;
-import org.junit.Test;
 import org.junit.Rule;
+import org.junit.Test;
 import org.junit.rules.ExpectedException;
-import java.io.IOException;
-import java.util.Map;
-import java.util.concurrent.ArrayBlockingQueue;
-import java.util.concurrent.BlockingQueue;
-
-import static org.junit.Assert.assertEquals;
-import static org.junit.Assert.assertTrue;
-import static org.junit.Assert.fail;
-import static org.junit.Assert.assertNotNull;
-
```

Review comment: Avoid unnecessarily changing the import order.
[GitHub] [hadoop] ayushtkn commented on a change in pull request #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
ayushtkn commented on a change in pull request #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
URL: https://github.com/apache/hadoop/pull/1699#discussion_r344055699

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java

```diff
@@ -301,4 +301,16 @@ public void testConfigureConnectionActiveRatio() throws IOException {
     tmpConnManager.close();
   }
+
+  @Test
+  public void testUnsupportedProtoExceptionMsg() throws IOException {
+    exceptionRule.expect(IllegalStateException.class);
+    exceptionRule
+        .expectMessage("Unsupported protocol for connection to NameNode: "
+            + UnsupportedProto.class.getName());
+    ConnectionPool.newConnection(conf, TEST_NN_ADDRESS, TEST_USER1,
+        UnsupportedProto.class);
+  }
+
+  interface UnsupportedProto { }
```

Review comment: Instead, can you use LambdaTestUtils, like this:

```java
@Test
public void testUnsupportedProtoExceptionMsg() throws Exception {
  LambdaTestUtils.intercept(IllegalStateException.class,
      "Unsupported protocol for connection to NameNode: "
          + TestConnectionManager.class.getName(),
      () -> ConnectionPool.newConnection(conf, TEST_NN_ADDRESS, TEST_USER1,
          TestConnectionManager.class));
}
```
[GitHub] [hadoop] hadoop-yetus commented on issue #1700: HDFS-14963. Add HDFS Client machine caching active namenode index mechanism.
hadoop-yetus commented on issue #1700: HDFS-14963. Add HDFS Client machine caching active namenode index mechanism.
URL: https://github.com/apache/hadoop/pull/1700#issuecomment-551453634

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 40 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 23 | Maven dependency ordering for branch |
| +1 | mvninstall | 1101 | trunk passed |
| +1 | compile | 199 | trunk passed |
| +1 | checkstyle | 55 | trunk passed |
| +1 | mvnsite | 126 | trunk passed |
| +1 | shadedclient | 930 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 121 | trunk passed |
| 0 | spotbugs | 170 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 301 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 14 | Maven dependency ordering for patch |
| +1 | mvninstall | 110 | the patch passed |
| +1 | compile | 193 | the patch passed |
| +1 | javac | 193 | the patch passed |
| +1 | checkstyle | 50 | hadoop-hdfs-project: The patch generated 0 new + 29 unchanged - 1 fixed = 29 total (was 30) |
| +1 | mvnsite | 111 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 1 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 774 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 113 | the patch passed |
| +1 | findbugs | 310 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 127 | hadoop-hdfs-client in the patch passed. |
| -1 | unit | 5216 | hadoop-hdfs in the patch failed. |
| +1 | asflicense | 45 | The patch does not generate ASF License warnings. |
| | | 9996 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeErasureCodingMetrics |
| | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1700/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1700 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 3d872a9f2744 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 42fc888 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1700/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1700/5/testReport/ |
| Max. process+thread count | 4219 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-client hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1700/5/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] Cosss7 commented on a change in pull request #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
Cosss7 commented on a change in pull request #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
URL: https://github.com/apache/hadoop/pull/1699#discussion_r344097267

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java

```diff
@@ -301,4 +301,16 @@ public void testConfigureConnectionActiveRatio() throws IOException {
     tmpConnManager.close();
   }
+
+  @Test
+  public void testUnsupportedProtoExceptionMsg() throws IOException {
+    exceptionRule.expect(IllegalStateException.class);
+    exceptionRule
+        .expectMessage("Unsupported protocol for connection to NameNode: "
+            + UnsupportedProto.class.getName());
+    ConnectionPool.newConnection(conf, TEST_NN_ADDRESS, TEST_USER1,
+        UnsupportedProto.class);
+  }
+
+  interface UnsupportedProto { }
```

Review comment: Got it.
[jira] [Commented] (HADOOP-16688) Update Hadoop website to mention Ozone mailing lists
[ https://issues.apache.org/jira/browse/HADOOP-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970014#comment-16970014 ]

Ayush Saxena commented on HADOOP-16688:
---------------------------------------

Thanx [~arp] for putting this up. Raised PR. Pls help review!!!

> Update Hadoop website to mention Ozone mailing lists
> ----------------------------------------------------
>
>                 Key: HADOOP-16688
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16688
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: website
>            Reporter: Arpit Agarwal
>            Priority: Major
>
> Now that Ozone has its separate mailing lists, let's list them on the Hadoop website.
> https://hadoop.apache.org/mailing_lists.html
> Thanks to [~ayushtkn] for suggesting this.

--
This message was sent by Atlassian Jira (v8.3.4#803005)
[jira] [Commented] (HADOOP-16676) Security Vulnerability for dependency jetty-xml (Backport HADOOP-16152 to branch-3.2)
[ https://issues.apache.org/jira/browse/HADOOP-16676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970039#comment-16970039 ]

Siyao Meng commented on HADOOP-16676:
-------------------------------------

{{TestDiskBalancer.testDiskBalancerWithFedClusterWithOneNameServiceEmpty}} is failing before the patch. Seems unrelated. The rest passed locally.

> Security Vulnerability for dependency jetty-xml (Backport HADOOP-16152 to branch-3.2)
> -------------------------------------------------------------------------------------
>
>                 Key: HADOOP-16676
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16676
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>    Affects Versions: 3.2.1
>            Reporter: DW
>            Assignee: Siyao Meng
>            Priority: Major
>         Attachments: HADOOP-16676.branch-3.2.001.patch, HADOOP-16676.branch-3.2.001.patch, HADOOP-16676.branch-3.2.002.patch
>
> Hello,
>
> org.apache.hadoop:hadoop-common define the dependency to jetty-webapp and
> jetty-xml in version v9.3.24 with known CVE-2017-9735. Please can you upgrade
> to version 9.4.7 or higher?
>
> +--- org.apache.hadoop:hadoop-client:3.2.1
> |    +--- org.apache.hadoop:hadoop-common:3.2.1
> |         +--- org.eclipse.jetty:jetty-webapp:9.3.24.v20180605
> |              +--- org.eclipse.jetty:jetty-xml:9.3.24.v20180605
> |              \--- org.eclipse.jetty:jetty-servlet:9.3.24.v20180605 (*)
[GitHub] [hadoop] ayushtkn commented on a change in pull request #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
ayushtkn commented on a change in pull request #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
URL: https://github.com/apache/hadoop/pull/1699#discussion_r344110373

## File path: hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestConnectionManager.java

```diff
@@ -301,4 +302,15 @@ public void testConfigureConnectionActiveRatio() throws IOException {
     tmpConnManager.close();
   }
+
+  @Test
+  public void testUnsupportedProtoExceptionMsg() throws Exception {
+    LambdaTestUtils.intercept(IllegalStateException.class,
+        "Unsupported protocol for connection to NameNode: "
+            + TestConnectionManager.class.getName(),
+        () -> ConnectionPool.newConnection(conf, TEST_NN_ADDRESS, TEST_USER1,
+            TestConnectionManager.class));
+  }
+
+  interface UnsupportedProto { }
```

Review comment: The line `interface UnsupportedProto { }` is no longer required; we can remove it. Apart from that, LGTM.
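For readers unfamiliar with the `LambdaTestUtils.intercept` pattern suggested in this review: it runs a callable and asserts both the exception type and that the message contains the expected text, replacing the older `ExpectedException` rule. The following is a minimal standalone sketch of that pattern, not Hadoop's actual implementation (the real helper is `org.apache.hadoop.test.LambdaTestUtils`); the class name `InterceptSketch` and the example messages are invented here so the snippet compiles without any Hadoop jars.

```java
import java.util.concurrent.Callable;

/**
 * Standalone sketch of the intercept pattern from LambdaTestUtils:
 * run a callable, fail if it returns or throws the wrong thing,
 * otherwise return the caught exception for further assertions.
 */
public class InterceptSketch {

  public static <E extends Throwable> E intercept(
      Class<E> clazz, String expectedText, Callable<?> eval) {
    Throwable thrown = null;
    Object result = null;
    try {
      result = eval.call();
    } catch (Throwable t) {
      thrown = t;
    }
    if (thrown == null) {
      // Meaningful failure text: say what was expected AND what we got.
      throw new AssertionError("Expected " + clazz.getName()
          + " but the call returned: " + result);
    }
    if (!clazz.isInstance(thrown)) {
      throw new AssertionError("Wrong exception type: " + thrown, thrown);
    }
    if (thrown.getMessage() == null
        || !thrown.getMessage().contains(expectedText)) {
      throw new AssertionError("Message did not contain '" + expectedText
          + "': " + thrown.getMessage(), thrown);
    }
    return clazz.cast(thrown);
  }

  public static void main(String[] args) {
    // The shape used in the review: assert exception type plus message text.
    IllegalStateException e = intercept(IllegalStateException.class,
        "Unsupported protocol",
        () -> {
          throw new IllegalStateException(
              "Unsupported protocol for connection to NameNode: Foo");
        });
    System.out.println("caught: " + e.getMessage());
  }
}
```

Compared with the `ExpectedException` rule in the earlier revision, the lambda form scopes the expectation to a single statement, so code after the throwing call cannot be silently skipped.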
[GitHub] [hadoop] hadoop-yetus commented on issue #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
hadoop-yetus commented on issue #1699: HDFS-14962. RBF: ConnectionPool#newConnection() error log wrong protocol class
URL: https://github.com/apache/hadoop/pull/1699#issuecomment-551611344

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 81 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1214 | trunk passed |
| +1 | compile | 30 | trunk passed |
| +1 | checkstyle | 20 | trunk passed |
| +1 | mvnsite | 36 | trunk passed |
| +1 | shadedclient | 871 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 48 | trunk passed |
| 0 | spotbugs | 67 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 66 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 30 | the patch passed |
| +1 | compile | 25 | the patch passed |
| +1 | javac | 25 | the patch passed |
| +1 | checkstyle | 15 | the patch passed |
| +1 | mvnsite | 29 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 888 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 45 | the patch passed |
| +1 | findbugs | 69 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 474 | hadoop-hdfs-rbf in the patch failed. |
| +1 | asflicense | 29 | The patch does not generate ASF License warnings. |
| | | 4104 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant |
| | hadoop.hdfs.server.federation.metrics.TestMetricsBase |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1699/5/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1699 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 637bd28ffb25 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 42fc888 |
| Default Java | 1.8.0_222 |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1699/5/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1699/5/testReport/ |
| Max. process+thread count | 2587 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1699/5/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
URL: https://github.com/apache/hadoop/pull/1702#discussion_r344138955

## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java

```diff
@@ -47,13 +53,29 @@ public void initialize() {}
    * @return An instance of the appropriate CopyFilter
    */
   public static CopyFilter getCopyFilter(Configuration conf) {
+    String filtersClassName = conf.get(DistCpConstants.CONF_LABEL_FILTERS_CLASS);
+
+    if (filtersClassName != null) {
+      try {
+        Class filtersClass = conf.getClassByName(filtersClassName).asSubclass(CopyFilter.class);
+        filtersClassName = filtersClass.getName();
+        Constructor constructor = filtersClass.getDeclaredConstructor(Configuration.class);
+        return constructor.newInstance(conf);
+      } catch (Exception e) {
+        LOG.error("Unable to instantiate " + filtersClassName, e);
+      }
+    }
+    return getDefaultCopyFilter(conf);
+  }
+
+  private static CopyFilter getDefaultCopyFilter(Configuration conf) {
     String filtersFilename = conf.get(DistCpConstants.CONF_LABEL_FILTERS_FILE);
     if (filtersFilename == null) {
       return new TrueCopyFilter();
     } else {
       String filterFilename = conf.get(
-          DistCpConstants.CONF_LABEL_FILTERS_FILE);
+        DistCpConstants.CONF_LABEL_FILTERS_FILE);
```

Review comment: revert
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
URL: https://github.com/apache/hadoop/pull/1702#discussion_r344141992

## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyFilter.java

```java
/**
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.hadoop.tools;

import org.apache.hadoop.conf.Configuration;
import org.junit.Assert;
import org.junit.Test;

/**
 * Test {@link CopyFilter}.
 */
public class TestCopyFilter {
```

Review comment: If you've never noticed before, I'm very strict about tests. My requirements are:

1. Tests must be designed to break the code rather than demonstrate its correctness in controlled circumstances.
2. There must be enough information from a failed Jenkins test run to begin debugging what just went wrong. That means: all asserts must have meaningful text with them; stack traces must not be swallowed.

Add tests for:

* non-existent class
* class of wrong type
* empty string
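As an aside on the "meaningful asserts" requirement above, here is a minimal standalone sketch of one of the requested failure-mode tests (the non-existent class case). It uses only `java.lang` reflection, so no DistCp classes are assumed; the class name `FailureModeSketch` and the bogus class name are invented for illustration.

```java
/**
 * Sketch of a failure-mode test: loading a non-existent class should
 * surface ClassNotFoundException, and the failure text should carry
 * enough context to start debugging from a CI log alone.
 */
public class FailureModeSketch {

  /** Thin wrapper standing in for whatever does the class lookup. */
  public static Class<?> load(String name) throws ClassNotFoundException {
    return Class.forName(name);
  }

  public static void main(String[] args) {
    String bogus = "no.such.pkg.NoSuchFilter";
    try {
      load(bogus);
      // A bare fail() would be useless in a Jenkins log; name the expectation.
      throw new AssertionError("Expected ClassNotFoundException for " + bogus);
    } catch (ClassNotFoundException expected) {
      // Don't swallow the stack trace in real tests; here we just report it.
      System.out.println("got expected failure: " + expected.getMessage());
    }
  }
}
```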
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
URL: https://github.com/apache/hadoop/pull/1702#discussion_r344140713

## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpConstants.java

```diff
@@ -120,6 +120,13 @@ private DistCpConstants() {
   /* DistCp CopyListing class override param */
   public static final String CONF_LABEL_COPY_LISTING_CLASS =
       "distcp.copy.listing.class";
+
+  /* DistCp Filter class override param */
```

Review comment: Add a "." at the end to keep javadoc happy. And yes, these should be javadocs.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
URL: https://github.com/apache/hadoop/pull/1702#discussion_r344142319

## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyFilter.java

```java
/**
 * Test {@link CopyFilter}.
 */
public class TestCopyFilter {

  /**
   * Test {@link CopyFilter#getCopyFilter(Configuration)}
   */
  @Test
  public void testGetCopyFilter() {
```

Review comment: Make each test a separate method.
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
URL: https://github.com/apache/hadoop/pull/1702#discussion_r344147214

## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java

```diff
@@ -47,13 +53,29 @@ public void initialize() {}
    * @return An instance of the appropriate CopyFilter
    */
   public static CopyFilter getCopyFilter(Configuration conf) {
+    String filtersClassName = conf.get(DistCpConstants.CONF_LABEL_FILTERS_CLASS);
+
+    if (filtersClassName != null) {
+      try {
+        Class filtersClass = conf.getClassByName(filtersClassName).asSubclass(CopyFilter.class);
+        filtersClassName = filtersClass.getName();
+        Constructor constructor = filtersClass.getDeclaredConstructor(Configuration.class);
+        return constructor.newInstance(conf);
+      } catch (Exception e) {
+        LOG.error("Unable to instantiate " + filtersClassName, e);
+      }
+    }
+    return getDefaultCopyFilter(conf);
```

Review comment: after fixing the class load to propagate the exception, make this an `else {}` clause
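To make the reviewer's two suggestions concrete (propagate the class-load failure instead of logging and falling through, then put the default path in an `else` clause), here is a hedged standalone sketch of that shape. It is not DistCp's actual code: `Filter`, `TrueFilter`, and the plain `String` "conf" argument are local stand-ins for `CopyFilter`, `TrueCopyFilter`, and Hadoop's `Configuration`, used only so the sketch compiles on its own.

```java
import java.lang.reflect.Constructor;

/**
 * Sketch: a misconfigured filter class name should fail the job with the
 * original exception attached, never silently fall back to the default.
 */
public class FilterFactorySketch {

  public abstract static class Filter {
    public abstract boolean shouldCopy(String path);
  }

  /** Stand-in for TrueCopyFilter: copies everything. */
  public static class TrueFilter extends Filter {
    public TrueFilter(String conf) { }
    @Override public boolean shouldCopy(String path) { return true; }
  }

  public static Filter getFilter(String conf, String filterClassName) {
    if (filterClassName != null) {
      try {
        Class<? extends Filter> cls =
            Class.forName(filterClassName).asSubclass(Filter.class);
        Constructor<? extends Filter> ctor =
            cls.getDeclaredConstructor(String.class);
        return ctor.newInstance(conf);
      } catch (Exception e) {
        // Propagate rather than swallow: a bad class name is a user error
        // the job should surface, with the cause preserved for the log.
        throw new RuntimeException(
            "Unable to instantiate " + filterClassName, e);
      }
    } else {
      // else clause: only reached when no custom class was configured.
      return new TrueFilter(conf);
    }
  }

  public static void main(String[] args) {
    Filter f = getFilter("conf", null);
    System.out.println(f.shouldCopy("/tmp/x")); // default filter copies all
    try {
      getFilter("conf", "no.such.FilterImpl");
    } catch (RuntimeException e) {
      System.out.println("failed fast: " + e.getMessage());
    }
  }
}
```

With this shape, the three failure-mode tests asked for elsewhere in the review (non-existent class, class of wrong type, empty string) all have a single observable behavior to assert on: the thrown exception and its cause.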
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344145224 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DefaultCopyFilter.java ## @@ -0,0 +1,63 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import com.google.common.annotations.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; + +import java.util.ArrayList; +import java.util.List; +import java.util.regex.Pattern; + +/** + * Implementation of default filter for DistCp. + * {@link DistCpConstants#CONF_LABEL_FILTERS_CLASS} needs to be set in {@link Configuration} + * when launching a distcp job. + */ +public class DefaultCopyFilter extends CopyFilter { + +/** + * Regex which can used to filter source files. + * {@link DistCpConstants#DISTCP_EXCLUDE_FILE_REGEX} can be set in {@link Configuration} when + * launching a DistCp job. If not set no files will be filtered. 
+ */ +private String excludeFileRegex; + +private List<Pattern> filters = new ArrayList<>(); + +protected DefaultCopyFilter(Configuration conf) { +excludeFileRegex = conf.get(DistCpConstants.DISTCP_EXCLUDE_FILE_REGEX); +if (excludeFileRegex != null) { +Pattern pattern = Pattern.compile(excludeFileRegex); +filters.add(pattern); +} +} + +@Override +public boolean shouldCopy(Path path) { +for (Pattern filter : filters) { +if (filter.matcher(path.toString()).matches()) { +LOG.debug("Skipping " + path.toString() + " as it matches the filter regex"); Review comment: use SLF4J inline {} expansion instead of string concatenation.
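The SLF4J "inline {} expansion" asked for above replaces `LOG.debug("Skipping " + path + " ...")` with `LOG.debug("Skipping {} as it matches the filter regex", path)`, so the message string is only assembled when debug logging is actually enabled. Since SLF4J itself is not on a bare JDK classpath, here is a minimal stand-in that shows the substitution semantics; the `format` helper is an illustration, not SLF4J's implementation.

```java
// JDK-only illustration of SLF4J-style "{}" parameter expansion.
// The real one-line fix in the patch would be:
//   LOG.debug("Skipping {} as it matches the filter regex", path);
public class Slf4jStyleSketch {

    // Replace each "{}" placeholder with the next argument, left to right.
    static String format(String template, Object... args) {
        StringBuilder out = new StringBuilder();
        int from = 0;
        for (Object arg : args) {
            int at = template.indexOf("{}", from);
            if (at < 0) {
                break; // more args than placeholders: ignore the extras
            }
            out.append(template, from, at).append(arg);
            from = at + 2;
        }
        out.append(template.substring(from));
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(
            format("Skipping {} as it matches the filter regex", "/user/bar"));
    }
}
```

The performance argument: with concatenation the string is built on every call; with placeholders, a disabled debug level skips the formatting entirely.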
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344142549 ## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyFilter.java ## @@ -0,0 +1,47 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import org.apache.hadoop.conf.Configuration; +import org.junit.Assert; +import org.junit.Test; + +/** + * Test {@link CopyFilter}. + */ +public class TestCopyFilter { + +/** + * Test {@link CopyFilter#getCopyFilter(Configuration)} + */ +@Test +public void testGetCopyFilter() { +Configuration configuration = new Configuration(false); Review comment: Add messages; include the class which fails. Same for the others. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
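An assertion message that names the class which actually came back, as the review asks, could look like the sketch below. JDK-only stand-in: the `assertTrue` helper mirrors JUnit's `Assert.assertTrue(String message, boolean condition)` signature, and the filter classes here are toy versions.

```java
public class AssertMessageSketch {

    static class TrueCopyFilter {}
    static class RegexCopyFilter {}

    // Mirror of JUnit's Assert.assertTrue(String, boolean): on failure, the
    // message tells you which class was actually returned.
    static void assertTrue(String message, boolean condition) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    public static void main(String[] args) {
        Object copyFilter = new TrueCopyFilter();
        assertTrue("Expected a TrueCopyFilter but got "
                + copyFilter.getClass().getName(),
            copyFilter instanceof TrueCopyFilter);
        System.out.println("ok");
    }
}
```

A bare `assertTrue(copyFilter instanceof TrueCopyFilter)` fails with only a stack trace; the message form says immediately which wrong class was produced.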
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344142655 ## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDefaultCopyFilter.java ## @@ -0,0 +1,46 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.junit.Assert; Review comment: nit: import ordering This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344147755 ## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDefaultCopyFilter.java ## @@ -0,0 +1,46 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.junit.Assert; +import org.junit.Test; + +/** + * Test {@link DefaultCopyFilter} + */ +public class TestDefaultCopyFilter { + +@Test +public void testShouldCopy() { +Configuration configuration = new Configuration(false); +configuration.set("distcp.exclude-file-regex", Review comment: use a reference to the constant This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344141005 ## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyFilter.java ## @@ -0,0 +1,47 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import org.apache.hadoop.conf.Configuration; +import org.junit.Assert; Review comment: stick in their own block above the org.apache one, below java.* imports This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344142913 ## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestDefaultCopyFilter.java ## @@ -0,0 +1,46 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; +import org.junit.Assert; +import org.junit.Test; + +/** + * Test {@link DefaultCopyFilter} + */ +public class TestDefaultCopyFilter { + +@Test +public void testShouldCopy() { +Configuration configuration = new Configuration(false); +configuration.set("distcp.exclude-file-regex", + "\\/.*_COPYING_$|\\/.*_COPYING$|^.*\\/\\.[^\\/]*$|\\/_temporary$|\\/\\_temporary\\/|.*\\/\\.Trash\\/.*"); +DefaultCopyFilter defaultCopyFilter = new DefaultCopyFilter(configuration); +Path shouldCopyPath = new Path("/user/bar"); +Assert.assertTrue(defaultCopyFilter.shouldCopy(shouldCopyPath)); Review comment: messages to include path and explanation of why the assert failed. This is an automated message from the Apache Git Service. 
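A failure message carrying the path and the reason, per the comment above, could be shaped like this. JDK-only sketch: a toy exclusion filter built on `java.util.regex`, using one illustrative sub-pattern of the regex from the quoted test, stands in for `DefaultCopyFilter.shouldCopy`.

```java
import java.util.regex.Pattern;

public class PathAssertSketch {

    // Toy exclusion filter with the same shape as the patch's shouldCopy();
    // the pattern is an illustrative fragment of the test's exclude regex.
    static final Pattern EXCLUDE = Pattern.compile(".*_COPYING_$");

    static boolean shouldCopy(String path) {
        return !EXCLUDE.matcher(path).matches();
    }

    // The assertion message names the path and explains the failure.
    static void checkCopied(String path) {
        if (!shouldCopy(path)) {
            throw new AssertionError("Path " + path
                + " should have been copied but matched the exclude regex");
        }
    }

    public static void main(String[] args) {
        checkCopied("/user/bar"); // passes
        // checkCopied("/user/bar/part-0000._COPYING_");
        //   would fail, with the offending path printed in the message
        System.out.println("ok");
    }
}
```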
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344146123 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DefaultCopyFilter.java ## @@ -0,0 +1,63 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import com.google.common.annotations.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; + +import java.util.ArrayList; +import java.util.List; +import java.util.regex.Pattern; + +/** + * Implementation of default filter for DistCp. + * {@link DistCpConstants#CONF_LABEL_FILTERS_CLASS} needs to be set in {@link Configuration} + * when launching a distcp job. + */ +public class DefaultCopyFilter extends CopyFilter { Review comment: it's a regexp filter, and the name must reflect this, somehow. Maybe "RegexpInConfigurationFilter"
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344147693 ## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyFilter.java ## @@ -0,0 +1,47 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import org.apache.hadoop.conf.Configuration; +import org.junit.Assert; +import org.junit.Test; + +/** + * Test {@link CopyFilter}. + */ +public class TestCopyFilter { + +/** + * Test {@link CopyFilter#getCopyFilter(Configuration)} + */ +@Test +public void testGetCopyFilter() { +Configuration configuration = new Configuration(false); +CopyFilter copyFilter = CopyFilter.getCopyFilter(configuration); +Assert.assertTrue(copyFilter instanceof TrueCopyFilter); + +configuration.set("distcp.filters.file", "random"); Review comment: use a reference to the constant This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344140201 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DefaultCopyFilter.java ## @@ -0,0 +1,63 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import com.google.common.annotations.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; + +import java.util.ArrayList; +import java.util.List; +import java.util.regex.Pattern; + +/** + * Implementation of default filter for DistCp. + * {@link DistCpConstants#CONF_LABEL_FILTERS_CLASS} needs to be set in {@link Configuration} + * when launching a distcp job. + */ +public class DefaultCopyFilter extends CopyFilter { + +/** + * Regex which can used to filter source files. + * {@link DistCpConstants#DISTCP_EXCLUDE_FILE_REGEX} can be set in {@link Configuration} when + * launching a DistCp job. If not set no files will be filtered. 
+ */ +private String excludeFileRegex; Review comment: nit: use two space indents
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344145023 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java ## @@ -17,15 +17,21 @@ */ package org.apache.hadoop.tools; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; +import java.lang.reflect.Constructor; + /** * Interface for excluding files from DistCp. * */ public abstract class CopyFilter { + static final Log LOG = LogFactory.getLog(CopyFilter.class); Review comment: make this private and use SLF4J
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344140842 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DefaultCopyFilter.java ## @@ -0,0 +1,63 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import com.google.common.annotations.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; + +import java.util.ArrayList; +import java.util.List; +import java.util.regex.Pattern; + +/** + * Implementation of default filter for DistCp. + * {@link DistCpConstants#CONF_LABEL_FILTERS_CLASS} needs to be set in {@link Configuration} + * when launching a distcp job. + */ +public class DefaultCopyFilter extends CopyFilter { + +/** + * Regex which can used to filter source files. + * {@link DistCpConstants#DISTCP_EXCLUDE_FILE_REGEX} can be set in {@link Configuration} when + * launching a DistCp job. If not set no files will be filtered. 
+ */ +private String excludeFileRegex; + +private List<Pattern> filters = new ArrayList<>(); + +protected DefaultCopyFilter(Configuration conf) { +excludeFileRegex = conf.get(DistCpConstants.DISTCP_EXCLUDE_FILE_REGEX); +if (excludeFileRegex != null) { Review comment: use getTrimmed(option, "") and then check !isEmpty(), so that "" is treated as no-regexp
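The `getTrimmed`/`isEmpty` pattern suggested above can be sketched as follows. JDK-only stand-in: a map-backed `getTrimmed` helper mimics `Configuration.getTrimmed(name, defaultValue)`, and the option key is illustrative. The effect is that an unset option and a blank or whitespace-only option both mean "no exclude regex".

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;

public class TrimmedOptionSketch {

    // Stand-in for Configuration.getTrimmed(name, defaultValue):
    // trims the value, falls back to the default when unset.
    static String getTrimmed(Map<String, String> conf, String name,
            String defaultValue) {
        String value = conf.get(name);
        return value == null ? defaultValue : value.trim();
    }

    // Per the review: default to "" and skip compilation when empty,
    // instead of a null check that lets "" through as a real pattern.
    static List<Pattern> buildFilters(Map<String, String> conf) {
        List<Pattern> filters = new ArrayList<>();
        String excludeFileRegex =
            getTrimmed(conf, "distcp.exclude-file-regex", "");
        if (!excludeFileRegex.isEmpty()) {
            filters.add(Pattern.compile(excludeFileRegex));
        }
        return filters;
    }
}
```

With the original `!= null` check, a user setting the option to an empty string would compile `Pattern.compile("")`, which matches only empty paths; the trimmed-and-empty check avoids that surprise.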
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344147606 ## File path: hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/TestCopyFilter.java ## @@ -0,0 +1,47 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import org.apache.hadoop.conf.Configuration; +import org.junit.Assert; +import org.junit.Test; + +/** + * Test {@link CopyFilter}. 
+ */ +public class TestCopyFilter { + +/** + * Test {@link CopyFilter#getCopyFilter(Configuration)} + */ +@Test +public void testGetCopyFilter() { +Configuration configuration = new Configuration(false); +CopyFilter copyFilter = CopyFilter.getCopyFilter(configuration); +Assert.assertTrue(copyFilter instanceof TrueCopyFilter); + +configuration.set("distcp.filters.file", "random"); +copyFilter = CopyFilter.getCopyFilter(configuration); +Assert.assertTrue(copyFilter instanceof RegexCopyFilter); + +configuration.set("distcp.filters.class", "org.apache.hadoop.tools.DefaultCopyFilter"); Review comment: use references to constants for ease of maintenance
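"Use references to constants" means replacing the string literals in the test with the `DistCpConstants` fields that define them, so a renamed option breaks at compile time rather than silently at runtime. A JDK-only sketch: the nested `DistCpConstants` here is a stand-in for Hadoop's class, with the key values taken from the quoted test code.

```java
import java.util.HashMap;
import java.util.Map;

public class ConstantsSketch {

    // Stand-in for DistCpConstants: one definition of each option name,
    // referenced everywhere instead of being retyped as a literal.
    static final class DistCpConstants {
        static final String CONF_LABEL_FILTERS_FILE = "distcp.filters.file";
        static final String CONF_LABEL_FILTERS_CLASS = "distcp.filters.class";
    }

    public static void main(String[] args) {
        Map<String, String> configuration = new HashMap<>();
        // Instead of configuration.put("distcp.filters.class", ...):
        configuration.put(DistCpConstants.CONF_LABEL_FILTERS_CLASS,
            "org.apache.hadoop.tools.DefaultCopyFilter");
        System.out.println(
            configuration.get(DistCpConstants.CONF_LABEL_FILTERS_CLASS));
    }
}
```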
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344144110 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java ## @@ -47,13 +53,29 @@ public void initialize() {} * @return An instance of the appropriate CopyFilter */ public static CopyFilter getCopyFilter(Configuration conf) { +String filtersClassName = conf.get(DistCpConstants.CONF_LABEL_FILTERS_CLASS); + +if (filtersClassName != null) { + try { +Class<? extends CopyFilter> filtersClass = conf.getClassByName(filtersClassName).asSubclass(CopyFilter.class); +filtersClassName = filtersClass.getName(); +Constructor<? extends CopyFilter> constructor = filtersClass.getDeclaredConstructor(Configuration.class); +return constructor.newInstance(conf); + } catch (Exception e) { +LOG.error("Unable to instantiate " + filtersClassName, e); + } +} +return getDefaultCopyFilter(conf); + } + + private static CopyFilter getDefaultCopyFilter(Configuration conf) { String filtersFilename = conf.get(DistCpConstants.CONF_LABEL_FILTERS_FILE); if (filtersFilename == null) { return new TrueCopyFilter(); } else { String filterFilename = conf.get( - DistCpConstants.CONF_LABEL_FILTERS_FILE); + DistCpConstants.CONF_LABEL_FILTERS_FILE); Review comment: nit: revert
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344139228 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java ## @@ -17,15 +17,21 @@ */ package org.apache.hadoop.tools; +import org.apache.commons.logging.Log; +import org.apache.commons.logging.LogFactory; import org.apache.hadoop.conf.Configuration; import org.apache.hadoop.fs.Path; +import java.lang.reflect.Constructor; Review comment: java.* imports should go above the org.apache ones in their own block
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344139911 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DefaultCopyFilter.java ## @@ -0,0 +1,63 @@ +/** + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package org.apache.hadoop.tools; + +import com.google.common.annotations.VisibleForTesting; +import org.apache.hadoop.conf.Configuration; +import org.apache.hadoop.fs.Path; + +import java.util.ArrayList; Review comment: java.* imports should go above the org.apache ones in their own block This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in…
steveloughran commented on a change in pull request #1702: HDFS-14788 Use dynamic regex filter to ignore copy of source files in… URL: https://github.com/apache/hadoop/pull/1702#discussion_r344139802 ## File path: hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/CopyFilter.java ## @@ -47,13 +53,29 @@ public void initialize() {} * @return An instance of the appropriate CopyFilter */ public static CopyFilter getCopyFilter(Configuration conf) { +String filtersClassName = conf.get(DistCpConstants.CONF_LABEL_FILTERS_CLASS); Review comment: 1. Use Configuration.getClass(). 2. Treat a failure to instantiate as a disaster, not an exception to swallow.
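The fail-fast behaviour the review asks for could look roughly like this. This sketch uses plain reflection instead of Hadoop's Configuration.getClass() so it stands alone; the class and method names are invented for illustration:

```java
public class FilterLoader {

    // Load a filter implementation by class name. Any failure to
    // instantiate is treated as a fatal configuration error and rethrown,
    // rather than logged and swallowed.
    static Object loadFilter(String className) {
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new IllegalStateException(
                "Unable to instantiate copy filter " + className, e);
        }
    }

    public static void main(String[] args) {
        // Uses a JDK class as a stand-in for a configured filter class.
        System.out.println(loadFilter("java.util.ArrayList").getClass().getName());
    }
}
```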
[jira] [Commented] (HADOOP-16693) Review InterruptedException Handling
[ https://issues.apache.org/jira/browse/HADOOP-16693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970124#comment-16970124 ] Steve Loughran commented on HADOOP-16693: - can you do this as a github pr. thanks > Review InterruptedException Handling > > > Key: HADOOP-16693 > URL: https://issues.apache.org/jira/browse/HADOOP-16693 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16693.1.patch > > > Difficult to do well. I hopefully improved it some. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16696) Adding an option to Always use Read Ahead, even for non sequential reads
[ https://issues.apache.org/jira/browse/HADOOP-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970127#comment-16970127 ] Steve Loughran commented on HADOOP-16696: - can you submit as github PR. thanks > Adding an option to Always use Read Ahead, even for non sequential reads > > > Key: HADOOP-16696 > URL: https://issues.apache.org/jira/browse/HADOOP-16696 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Saurabh >Priority: Major > Attachments: patch1.diff > > > Adding a config fs.azure.always.readahead, which is disabled by default, to > allow read ahead in case of non-sequential reads, such as when reading > parquet file in spark.
[jira] [Updated] (HADOOP-16694) Use Objects requireNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-16694: Attachment: HADOOP-16694.2.patch > Use Objects requireNull Where Appropriate > - > > Key: HADOOP-16694 > URL: https://issues.apache.org/jira/browse/HADOOP-16694 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16694.1.patch, HADOOP-16694.2.patch > > > https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#requireNonNull-T-
[jira] [Updated] (HADOOP-16694) Use Objects requireNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-16694: Status: Open (was: Patch Available) > Use Objects requireNull Where Appropriate > - > > Key: HADOOP-16694 > URL: https://issues.apache.org/jira/browse/HADOOP-16694 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16694.1.patch, HADOOP-16694.2.patch > > > https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#requireNonNull-T-
[jira] [Updated] (HADOOP-16694) Use Objects requireNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-16694: Status: Patch Available (was: Open) > Use Objects requireNull Where Appropriate > - > > Key: HADOOP-16694 > URL: https://issues.apache.org/jira/browse/HADOOP-16694 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16694.1.patch, HADOOP-16694.2.patch > > > https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#requireNonNull-T-
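For context, the pattern HADOOP-16694 proposes (per the linked javadoc) replaces manual null checks with Objects.requireNonNull. A minimal illustration, with an invented class name:

```java
import java.util.Objects;

public class RequireNonNullExample {
    private final String name;

    public RequireNonNullExample(String name) {
        // Fails immediately with a descriptive NullPointerException,
        // instead of a harder-to-trace NPE later on.
        this.name = Objects.requireNonNull(name, "name must not be null");
    }

    public String getName() {
        return name;
    }

    public static void main(String[] args) {
        System.out.println(new RequireNonNullExample("hadoop").getName());
    }
}
```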
[jira] [Commented] (HADOOP-16693) Review InterruptedException Handling
[ https://issues.apache.org/jira/browse/HADOOP-16693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970152#comment-16970152 ] David Mollitor commented on HADOOP-16693: - [~ste...@apache.org] Done! Thanks > Review InterruptedException Handling > > > Key: HADOOP-16693 > URL: https://issues.apache.org/jira/browse/HADOOP-16693 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16693.1.patch > > > Difficult to do well. I hopefully improved it some.
[GitHub] [hadoop] belugabehr opened a new pull request #1705: HADOOP-16693: Review InterruptedException Handling
belugabehr opened a new pull request #1705: HADOOP-16693: Review InterruptedException Handling URL: https://github.com/apache/hadoop/pull/1705 ## NOTICE Please create an issue in ASF JIRA before opening a pull request, and you need to set the title of the pull request which starts with the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
[GitHub] [hadoop] steveloughran commented on issue #1671: HADOOP-16665. Filesystems to be closed if they failed during initialize().
steveloughran commented on issue #1671: HADOOP-16665. Filesystems to be closed if they failed during initialize(). URL: https://github.com/apache/hadoop/pull/1671#issuecomment-551790309 Rebased
[GitHub] [hadoop] steveloughran commented on issue #1695: HADOOP-16685: FileSystem#listStatusIterator does not check if given path exists
steveloughran commented on issue #1695: HADOOP-16685: FileSystem#listStatusIterator does not check if given path exists URL: https://github.com/apache/hadoop/pull/1695#issuecomment-551812971 Didn't know it got used much. I've only just discovered how much listLocatedStatus was used. We could adopt it, as it could be better for those massive directory listings - though S3Guard complicates life there.
[jira] [Commented] (HADOOP-16629) support copyFile in s3a filesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970195#comment-16970195 ] Steve Loughran commented on HADOOP-16629: - bq. S3guard is one of good reasons I think this API needs to be in Hadoop rather than forking a process out to run "s3 sync". The encryption problems are not specific to this API, because it is equally applicable here. Yes, but different file systems may have different configurations such as S3Guard DDB tables - you need to know the specific table used by the source FS and query that for the directory listings, rather than use the settings of the destination FS. Similarly, the secret key for SSE-C operations needs to be known in the source; you actually need to add that as one of the headers in the copy operation. Also, if you actually want a version of this CP which was optimised for this world, you would do multipart copies. That is, your source would not be a simple URI, it would be a URI and a range; the result an opaque byte array containing the information needed to commit the request along with the rest of the initiated upload. Do you just want an S3Guard-enabled version of "aws s3", or do you actually want a distcp which can do cross-store copying? As I know which one will scale better. bq. Those particular problems aren't solved by ignoring them, I concur. bq. but they are also not solved by forcing a ViewFS + path mounts as a workaround for what you propose. I think you have misunderstood or I have explained badly. I wasn't trying to force a ViewFS model. I am trying to say * it is a lot harder than you think and just implementing Filesystem.copy(URI, URI) from a single file system isn't going to work. * we like our file system APIs to be stable and cross store. For now: use "aws s3" and then s3guard import to build up the table at the destination. 
> support copyFile in s3a filesystem > -- > > Key: HADOOP-16629 > URL: https://issues.apache.org/jira/browse/HADOOP-16629 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Rajesh Balamohan >Assignee: Rajesh Balamohan >Priority: Minor >
[GitHub] [hadoop] steveloughran commented on issue #1698: HADOOP-15619. Bucket info to add more on authoritative mode
steveloughran commented on issue #1698: HADOOP-15619. Bucket info to add more on authoritative mode URL: https://github.com/apache/hadoop/pull/1698#issuecomment-551825876 checkstyle
```
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java:1202: public int run(String[] args, PrintStream out):5: Method length is 174 lines (max allowed is 150). [MethodLength]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java:1249: if (!authoritativePaths.isEmpty()): 'if' construct must use '{}'s. [NeedBraces]
./hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/S3GuardTool.java:1300: "The 'staging' committer is used -prefer the 'directory' committer");: Line is longer than 80 characters (found 81). [LineLength]
```
the if() clause may be a bug
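The NeedBraces warning above is about more than style: without braces, only the single statement after the `if` is guarded, which is exactly the kind of latent bug the comment hints at. A self-contained demonstration (names invented, not from the patch):

```java
public class NeedBracesExample {

    // Brace-less 'if': only the statement immediately following it is
    // guarded. The second statement always runs, however the indentation
    // makes it look conditional.
    static int unbraced(boolean flag) {
        int n = 0;
        if (flag)
            n++;
        n += 10;   // always executed, regardless of flag
        return n;
    }

    // With braces, the guarded region is explicit.
    static int braced(boolean flag) {
        int n = 0;
        if (flag) {
            n++;
            n += 10;   // only executed when flag is true
        }
        return n;
    }

    public static void main(String[] args) {
        System.out.println(unbraced(false)); // 10, not the 0 a reader might expect
        System.out.println(braced(false));   // 0
    }
}
```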
[jira] [Created] (HADOOP-16697) audit/tune s3a authoritative flag in s3guard DDB Table
Steve Loughran created HADOOP-16697: --- Summary: audit/tune s3a authoritative flag in s3guard DDB Table Key: HADOOP-16697 URL: https://issues.apache.org/jira/browse/HADOOP-16697 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.3.0 Reporter: Steve Loughran Assignee: Steve Loughran S3A auth mode can cause confusion in deployments, because people expect there never to be any HTTP requests to S3 in a path marked as authoritative. This is *not* the case when S3Guard doesn't have an entry for the path in the table, which is the state it is in when the directory was populated using different tools (e.g. AWS s3 command). Proposed 1. HADOOP-16684 to give more diagnostics about the bucket 2. add an audit command to take a path and verify that it is marked in dynamoDB as authoritative *all the way down* This command is designed to be executed from the commandline and will return different error codes based on different situations: * path isn't guarded * path is not authoritative in s3a settings (dir, path) * path not known in table: use the 404/44 response * path contains 1+ dir entry which is non-auth 3. Use this audit after some of the bulk rename, delete, import, commit (soon: upload, copy) operations to verify that, where appropriate, we do update the directories. Particularly for incremental rename() where I have long suspected we may have to do more there. 4. Review documentation and make it clear what is needed (import) after uploading/generating data through other tools. I'm going to pull in the open JIRAs on this topic as they are all related
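One possible shape for the proposed audit command's result-to-exit-code mapping, sketched with invented names. Only the value 44 comes from the JIRA text ("use the 404/44 response"); the other codes are placeholders, not anything the JIRA specifies:

```java
public class AuthAudit {

    // The four failure situations the JIRA lists, plus success.
    enum Finding {
        AUTHORITATIVE_ALL_THE_WAY_DOWN,
        PATH_NOT_GUARDED,
        NOT_AUTH_IN_SETTINGS,
        PATH_NOT_IN_TABLE,
        NON_AUTH_DIR_ENTRY
    }

    // Map each audit outcome to a distinct exit code so scripts can
    // tell the failure modes apart.
    static int exitCode(Finding f) {
        switch (f) {
            case AUTHORITATIVE_ALL_THE_WAY_DOWN: return 0;
            case PATH_NOT_GUARDED:               return 1;  // placeholder
            case NOT_AUTH_IN_SETTINGS:           return 2;  // placeholder
            case PATH_NOT_IN_TABLE:              return 44; // the "404/44" response
            case NON_AUTH_DIR_ENTRY:             return 3;  // placeholder
            default: throw new IllegalArgumentException("unknown finding: " + f);
        }
    }

    public static void main(String[] args) {
        System.out.println(exitCode(Finding.PATH_NOT_IN_TABLE)); // 44
    }
}
```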
[jira] [Commented] (HADOOP-16697) audit/tune s3a authoritative flag in s3guard DDB Table
[ https://issues.apache.org/jira/browse/HADOOP-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970226#comment-16970226 ] Steve Loughran commented on HADOOP-16697: - +add some tests of listLocatedStatus, listFiles, listStatus to verify they don't go near S3 on parts they consider authoritative > audit/tune s3a authoritative flag in s3guard DDB Table > -- > > Key: HADOOP-16697 > URL: https://issues.apache.org/jira/browse/HADOOP-16697 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > S3A auth mode can cause confusion in deployments, because people expect there > never to be any HTTP requests to S3 in a path marked as authoritative. > This is *not* the case when S3Guard doesn't have an entry for the path in the > table. Which is the state it is in when the directory was populated using > different tools (e.g AWS s3 command). > Proposed > 1. HADOOP-16684 to give more diagnostics about the bucket > 2. add an audit command to take a path and verify that it is marked in > dynamoDB as authoritative *all the way down* > This command is designed to be executed from the commandline and will return > different error codes based on different situations > * path isn't guarded > * path is not authoritative in s3a settings (dir, path) > * path not known in table: use the 404/44 response > * path contains 1+ dir entry which is non-auth > 3. Use this audit after some of the bulk rename, delete, import, commit > (soon: upload, copy) operations to verify that's where appropriate, we do > update the directories. Particularly for incremental rename() where I have > long suspected we may have to do more there. > 4. Review documentation and make it clear what is needed (import) after > uploading/Generating Data through other tools. 
> I'm going to pull in the open JIRAs on this topic as they are all related
[jira] [Commented] (HADOOP-16694) Use Objects requireNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970247#comment-16970247 ] Hadoop QA commented on HADOOP-16694: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 | @author | 0m 0s | The patch does not contain any @author tags. | | -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 19m 31s | trunk passed | | +1 | compile | 16m 13s | trunk passed | | +1 | checkstyle | 0m 51s | trunk passed | | +1 | mvnsite | 1m 14s | trunk passed | | +1 | shadedclient | 14m 56s | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 1m 40s | trunk passed | | +1 | javadoc | 1m 25s | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 0m 46s | the patch passed | | +1 | compile | 15m 31s | the patch passed | | +1 | javac | 15m 31s | the patch passed | | -0 | checkstyle | 0m 50s | hadoop-common-project/hadoop-common: The patch generated 1 new + 578 unchanged - 20 fixed = 579 total (was 598) | | +1 | mvnsite | 1m 9s | the patch passed | | +1 | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 | shadedclient | 12m 37s | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 1m 49s | the patch passed | | +1 | javadoc | 1m 18s | the patch passed | ||| _ Other Tests _ | | +1 | unit | 9m 20s | hadoop-common in the patch passed. | | +1 | asflicense | 0m 42s | The patch does not generate ASF License warnings. | | | | | 100m 24s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | HADOOP-16694 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12985352/HADOOP-16694.2.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 5b8e829d8a1e 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 42fc888 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16651/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16651/testReport/ | | Max. process+thread count | 1359 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-com
[jira] [Commented] (HADOOP-16474) S3Guard ProgressiveRenameTracker to mark dest dir as authoritative on success
[ https://issues.apache.org/jira/browse/HADOOP-16474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970249#comment-16970249 ] Steve Loughran commented on HADOOP-16474: - actually, we can just do this in close() > S3Guard ProgressiveRenameTracker to mark dest dir as authoritative on success > - > > Key: HADOOP-16474 > URL: https://issues.apache.org/jira/browse/HADOOP-16474 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Major > > After a directory rename is successful, the destination will contain only > those files which have been copied by the S3guard-enabled client, with the > directory tree updated as new entries are added. > At that point, the ProgressiveRenameTracker could tell the store to complete > the rename and in so doing, give clients maximum performance without needing > any LIST commands.
[jira] [Commented] (HADOOP-16697) audit/tune s3a authoritative flag in s3guard DDB Table
[ https://issues.apache.org/jira/browse/HADOOP-16697?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970252#comment-16970252 ] Steve Loughran commented on HADOOP-16697: - Pulling in the related JIRAs (HADOOP-16684 and HADOOP-16474) Looking at s3guard import, it's not setting the auth flag either, even though it's doing a recursive treewalk. Proposed: we do that :) > audit/tune s3a authoritative flag in s3guard DDB Table > -- > > Key: HADOOP-16697 > URL: https://issues.apache.org/jira/browse/HADOOP-16697 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > S3A auth mode can cause confusion in deployments, because people expect there > never to be any HTTP requests to S3 in a path marked as authoritative. > This is *not* the case when S3Guard doesn't have an entry for the path in the > table. Which is the state it is in when the directory was populated using > different tools (e.g AWS s3 command). > Proposed > 1. HADOOP-16684 to give more diagnostics about the bucket > 2. add an audit command to take a path and verify that it is marked in > dynamoDB as authoritative *all the way down* > This command is designed to be executed from the commandline and will return > different error codes based on different situations > * path isn't guarded > * path is not authoritative in s3a settings (dir, path) > * path not known in table: use the 404/44 response > * path contains 1+ dir entry which is non-auth > 3. Use this audit after some of the bulk rename, delete, import, commit > (soon: upload, copy) operations to verify that's where appropriate, we do > update the directories. Particularly for incremental rename() where I have > long suspected we may have to do more there. > 4. Review documentation and make it clear what is needed (import) after > uploading/Generating Data through other tools. 
> I'm going to pull in the open JIRAs on this topic as they are all related
[jira] [Updated] (HADOOP-16688) Update Hadoop website to mention Ozone mailing lists
[ https://issues.apache.org/jira/browse/HADOOP-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HADOOP-16688: -- Status: Patch Available (was: Open) > Update Hadoop website to mention Ozone mailing lists > > > Key: HADOOP-16688 > URL: https://issues.apache.org/jira/browse/HADOOP-16688 > Project: Hadoop Common > Issue Type: Improvement > Components: website >Reporter: Arpit Agarwal >Priority: Major > > Now that Ozone has its separate mailing lists, let's list them on the Hadoop > website. > https://hadoop.apache.org/mailing_lists.html > Thanks to [~ayushtkn] for suggesting this.
[GitHub] [hadoop] hadoop-yetus commented on issue #1698: HADOOP-16684. Bucket info to add more on authoritative mode
hadoop-yetus commented on issue #1698: HADOOP-16684. Bucket info to add more on authoritative mode URL: https://github.com/apache/hadoop/pull/1698#issuecomment-551865753 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 38 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1080 | trunk passed | | +1 | compile | 36 | trunk passed | | +1 | checkstyle | 28 | trunk passed | | +1 | mvnsite | 40 | trunk passed | | +1 | shadedclient | 812 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 30 | trunk passed | | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 58 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 34 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 3 new + 5 unchanged - 0 fixed = 8 total (was 5) | | +1 | mvnsite | 32 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 792 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 26 | the patch passed | | -1 | findbugs | 66 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 85 | hadoop-aws in the patch passed. | | +1 | asflicense | 34 | The patch does not generate ASF License warnings. 
| | | | 3342 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-aws | | | Switch statement found in org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(String[], PrintStream) where one case falls through to the next case At S3GuardTool.java:PrintStream) where one case falls through to the next case At S3GuardTool.java:[lines 1299-1306] | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/3/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1698 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ef0e1176d33e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 42fc888 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/3/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/3/testReport/ | | Max. process+thread count | 411 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/3/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1698: HADOOP-16684. Bucket info to add more on authoritative mode
hadoop-yetus commented on issue #1698: HADOOP-16684. Bucket info to add more on authoritative mode
URL: https://github.com/apache/hadoop/pull/1698#issuecomment-551870482

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 39 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1152 | trunk passed |
| +1 | compile | 38 | trunk passed |
| +1 | checkstyle | 28 | trunk passed |
| +1 | mvnsite | 41 | trunk passed |
| +1 | shadedclient | 816 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 27 | trunk passed |
| 0 | spotbugs | 62 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 59 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 36 | the patch passed |
| +1 | compile | 32 | the patch passed |
| +1 | javac | 32 | the patch passed |
| -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 1 new + 5 unchanged - 0 fixed = 6 total (was 5) |
| +1 | mvnsite | 32 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 786 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 24 | the patch passed |
| -1 | findbugs | 67 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
||| _ Other Tests _ |
| +1 | unit | 85 | hadoop-aws in the patch passed. |
| +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
| | | 3406 | |

| Reason | Tests |
|-------:|:------|
| FindBugs | module:hadoop-tools/hadoop-aws |
| | Switch statement found in org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(String[], PrintStream) where one case falls through to the next case At S3GuardTool.java:[lines 1300-1307] |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1698 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 12384b26c5db 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 42fc888 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/4/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/4/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/4/testReport/ |
| Max. process+thread count | 451 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1698/4/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16693) Review InterruptedException Handling
[ https://issues.apache.org/jira/browse/HADOOP-16693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970295#comment-16970295 ] Hadoop QA commented on HADOOP-16693:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 30m 9s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | dupname | 0m 2s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 18m 13s | trunk passed |
| +1 | compile | 17m 2s | trunk passed |
| +1 | checkstyle | 1m 11s | trunk passed |
| +1 | mvnsite | 1m 26s | trunk passed |
| +1 | shadedclient | 15m 11s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 31s | trunk passed |
| 0 | spotbugs | 2m 5s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 2s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 48s | the patch passed |
| +1 | compile | 16m 14s | the patch passed |
| -1 | javac | 16m 14s | root generated 7 new + 1890 unchanged - 0 fixed = 1897 total (was 1890) |
| -0 | checkstyle | 1m 13s | hadoop-common-project/hadoop-common: The patch generated 1 new + 884 unchanged - 5 fixed = 885 total (was 889) |
| +1 | mvnsite | 1m 22s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 49s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 29s | the patch passed |
| +1 | findbugs | 2m 12s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 7s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 56s | The patch does not generate ASF License warnings. |
| | | 134m 14s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1705/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1705 |
| JIRA Issue | HADOOP-16693 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 499e154f5888 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 42fc888 |
| Default Java | 1.8.0_222 |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1705/1/art
[GitHub] [hadoop] hadoop-yetus commented on issue #1705: HADOOP-16693: Review InterruptedException Handling
hadoop-yetus commented on issue #1705: HADOOP-16693: Review InterruptedException Handling
URL: https://github.com/apache/hadoop/pull/1705#issuecomment-551870691

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 1809 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 2 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 | mvninstall | 1093 | trunk passed |
| +1 | compile | 1022 | trunk passed |
| +1 | checkstyle | 71 | trunk passed |
| +1 | mvnsite | 86 | trunk passed |
| +1 | shadedclient | 911 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 91 | trunk passed |
| 0 | spotbugs | 125 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 122 | trunk passed |
||| _ Patch Compile Tests _ |
| +1 | mvninstall | 48 | the patch passed |
| +1 | compile | 974 | the patch passed |
| -1 | javac | 974 | root generated 7 new + 1890 unchanged - 0 fixed = 1897 total (was 1890) |
| -0 | checkstyle | 73 | hadoop-common-project/hadoop-common: The patch generated 1 new + 884 unchanged - 5 fixed = 885 total (was 889) |
| +1 | mvnsite | 82 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 769 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 89 | the patch passed |
| +1 | findbugs | 132 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 547 | hadoop-common in the patch passed. |
| +1 | asflicense | 56 | The patch does not generate ASF License warnings. |
| | | 8054 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1705/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1705 |
| JIRA Issue | HADOOP-16693 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 499e154f5888 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 42fc888 |
| Default Java | 1.8.0_222 |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1705/1/artifact/out/diff-compile-javac-root.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1705/1/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1705/1/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1705/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1671: HADOOP-16665. Filesystems to be closed if they failed during initialize().
hadoop-yetus commented on issue #1671: HADOOP-16665. Filesystems to be closed if they failed during initialize().
URL: https://github.com/apache/hadoop/pull/1671#issuecomment-551874345

:confetti_ball: **+1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| 0 | reexec | 86 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 8 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 73 | Maven dependency ordering for branch |
| +1 | mvninstall | 1241 | trunk passed |
| +1 | compile | 1085 | trunk passed |
| +1 | checkstyle | 175 | trunk passed |
| +1 | mvnsite | 127 | trunk passed |
| +1 | shadedclient | 1184 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 122 | trunk passed |
| 0 | spotbugs | 76 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 221 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 24 | Maven dependency ordering for patch |
| +1 | mvninstall | 89 | the patch passed |
| +1 | compile | 1125 | the patch passed |
| +1 | javac | 1125 | root generated 0 new + 1884 unchanged - 1 fixed = 1884 total (was 1885) |
| -0 | checkstyle | 169 | root: The patch generated 3 new + 98 unchanged - 1 fixed = 101 total (was 99) |
| +1 | mvnsite | 122 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 821 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 116 | the patch passed |
| +1 | findbugs | 206 | the patch passed |
||| _ Other Tests _ |
| +1 | unit | 558 | hadoop-common in the patch passed. |
| +1 | unit | 79 | hadoop-aws in the patch passed. |
| +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
| | | 7661 | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1671/4/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1671 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 26ecad526057 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 42fc888 |
| Default Java | 1.8.0_222 |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1671/4/artifact/out/diff-checkstyle-root.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1671/4/testReport/ |
| Max. process+thread count | 1474 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1671/4/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[GitHub] [hadoop] sahilTakiar commented on issue #1695: HADOOP-16685: FileSystem#listStatusIterator does not check if given path exists
sahilTakiar commented on issue #1695: HADOOP-16685: FileSystem#listStatusIterator does not check if given path exists URL: https://github.com/apache/hadoop/pull/1695#issuecomment-551880265 I think `listFiles(recursive=true)` was probably the more impactful change, especially for S3. It's not clear to me how much of an issue performance is with `listStatusIterator`.
[GitHub] [hadoop] sahilTakiar edited a comment on issue #1695: HADOOP-16685: FileSystem#listStatusIterator does not check if given path exists
sahilTakiar edited a comment on issue #1695: HADOOP-16685: FileSystem#listStatusIterator does not check if given path exists URL: https://github.com/apache/hadoop/pull/1695#issuecomment-551880265 I think `listFiles(recursive=true)` was probably the more impactful change, especially for S3. It's not clear to me how much of an issue performance is with `listStatusIterator`. As in, I'm not sure it is worth optimizing the perf of `listStatusIterator` right now, I haven't seen any perf issues with it.
[jira] [Commented] (HADOOP-16694) Use Objects requireNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970366#comment-16970366 ] Erik Krogen commented on HADOOP-16694: -- I wonder if it's also reasonable to replace instances of {{Preconditions.checkNotNull}} with {{Objects.requireNonNull}} to reduce our exposure to Guava a bit? But I guess the shading work makes that a less necessary endeavor. > Use Objects requireNull Where Appropriate > - > > Key: HADOOP-16694 > URL: https://issues.apache.org/jira/browse/HADOOP-16694 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16694.1.patch, HADOOP-16694.2.patch > > > https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#requireNonNull-T- -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
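For illustration, the swap Erik is suggesting is mechanical in the common case, since both the Guava and the JDK method throw `NullPointerException` and return their argument. The class and the `host` parameter below are hypothetical, not Hadoop code; a minimal sketch:

```java
import java.util.Objects;

public class RequireNonNullDemo {
  // Before (Guava): Preconditions.checkNotNull(host, "host must not be null");
  // After (JDK):    Objects.requireNonNull, which also returns its argument,
  // so it can be used inline in constructors and fluent code.
  static String normalize(String host) {
    return Objects.requireNonNull(host, "host must not be null")
        .toLowerCase();
  }

  public static void main(String[] args) {
    System.out.println(normalize("NameNode.EXAMPLE.COM")); // namenode.example.com
    try {
      normalize(null);
    } catch (NullPointerException expected) {
      System.out.println("NPE: " + expected.getMessage()); // NPE: host must not be null
    }
  }
}
```

One behavioral difference worth noting: Guava's `checkNotNull` supports `%s`-style message templates, so call sites using templated messages need the message rebuilt by hand (or left on Guava).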
[jira] [Commented] (HADOOP-16695) Make LogThrottlingHelper thread-safe
[ https://issues.apache.org/jira/browse/HADOOP-16695?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970367#comment-16970367 ] Erik Krogen commented on HADOOP-16695: -- Awesome, thanks [~zhangchen]! > Make LogThrottlingHelper thread-safe > > > Key: HADOOP-16695 > URL: https://issues.apache.org/jira/browse/HADOOP-16695 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Reporter: Chen Zhang >Assignee: Chen Zhang >Priority: Major > > HADOOP-15726 introduced the {{LogThrottlingHelper}}, but this class is not > thread-safe, which limits its usage scenarios, this Jira will try to improve > it.
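The real `LogThrottlingHelper` API is richer than this, so the class below is a hypothetical sketch of the kind of change the issue describes, not the actual Hadoop code: a periodic log throttle whose mutable state is guarded by the intrinsic lock so one instance can be shared across threads.

```java
// Hypothetical sketch (not the actual LogThrottlingHelper): a time-based
// log throttle. All mutable state is accessed only from synchronized
// methods, so concurrent callers see a consistent suppressed-count.
public class ThrottleSketch {
  private final long periodMs;
  private long lastLogMs = Long.MIN_VALUE;
  private long suppressed;

  public ThrottleSketch(long periodMs) {
    this.periodMs = periodMs;
  }

  /**
   * @return the number of suppressed events since the last logged one if
   * the caller should log now, or -1 if this event should be suppressed.
   */
  public synchronized long record(long nowMs) {
    if (lastLogMs == Long.MIN_VALUE || nowMs - lastLogMs >= periodMs) {
      long wasSuppressed = suppressed;
      suppressed = 0;
      lastLogMs = nowMs;
      return wasSuppressed;
    }
    suppressed++;
    return -1;
  }

  public static void main(String[] args) {
    ThrottleSketch t = new ThrottleSketch(1000);
    System.out.println(t.record(0));    // 0  -> log it
    System.out.println(t.record(10));   // -1 -> suppress
    System.out.println(t.record(20));   // -1 -> suppress
    System.out.println(t.record(1500)); // 2  -> log, 2 were suppressed
  }
}
```

Coarse-grained `synchronized` is the simplest fix; the trade-off is a small amount of contention on a hot logging path, which is one reason such a change deserves benchmarking in review.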
[jira] [Updated] (HADOOP-16692) UserGroupInformation Treats kerberosMinSecondsBeforeRelogin as Millis
[ https://issues.apache.org/jira/browse/HADOOP-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-16692: Status: Patch Available (was: Open) > UserGroupInformation Treats kerberosMinSecondsBeforeRelogin as Millis > - > > Key: HADOOP-16692 > URL: https://issues.apache.org/jira/browse/HADOOP-16692 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Attachments: HADOOP-16692.1.patch, HADOOP-16692.2.patch > >
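The class of bug in this issue's title is easy to sketch. The field and method names below are illustrative, not the actual `UserGroupInformation` code: a configuration value documented in seconds must be scaled before it is compared against millisecond timestamps, otherwise the guard fires roughly 1000x too early.

```java
import java.util.concurrent.TimeUnit;

public class ReloginInterval {
  // Config value documented in seconds (illustrative), e.g. a
  // "min seconds before relogin" setting of 60.
  static final long MIN_SECONDS_BEFORE_RELOGIN = 60;

  // Buggy pattern: compares the seconds value directly against a
  // millisecond delta, permitting relogin after only 60 ms.
  static boolean shouldReloginBuggy(long nowMs, long lastLoginMs) {
    return nowMs - lastLoginMs >= MIN_SECONDS_BEFORE_RELOGIN;
  }

  // Fixed pattern: convert the units once, explicitly.
  static boolean shouldRelogin(long nowMs, long lastLoginMs) {
    return nowMs - lastLoginMs
        >= TimeUnit.SECONDS.toMillis(MIN_SECONDS_BEFORE_RELOGIN);
  }

  public static void main(String[] args) {
    System.out.println(shouldReloginBuggy(100, 0)); // true (far too eager)
    System.out.println(shouldRelogin(100, 0));      // false
    System.out.println(shouldRelogin(60_000, 0));   // true
  }
}
```

Routing every conversion through `TimeUnit` keeps the unit visible at the call site, which is what makes this kind of mistake hard to reintroduce.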
[jira] [Commented] (HADOOP-16694) Use Objects requireNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970393#comment-16970393 ] David Mollitor commented on HADOOP-16694: - [~xkrogen] Good idea. Let me take a look. > Use Objects requireNull Where Appropriate > - > > Key: HADOOP-16694 > URL: https://issues.apache.org/jira/browse/HADOOP-16694 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16694.1.patch, HADOOP-16694.2.patch > > > https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#requireNonNull-T-
[jira] [Updated] (HADOOP-16694) Use Objects requireNonNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Erik Krogen updated HADOOP-16694: - Summary: Use Objects requireNonNull Where Appropriate (was: Use Objects requireNull Where Appropriate) > Use Objects requireNonNull Where Appropriate > > > Key: HADOOP-16694 > URL: https://issues.apache.org/jira/browse/HADOOP-16694 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16694.1.patch, HADOOP-16694.2.patch > > > https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#requireNonNull-T-
[jira] [Commented] (HADOOP-16694) Use Objects requireNonNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970414#comment-16970414 ] Erik Krogen commented on HADOOP-16694: -- By the way, I updated the title to reflect that it is {{requireNonNull}} instead of {{requireNull}}. I was very confused when I first saw this come in with the old title :) > Use Objects requireNonNull Where Appropriate > > > Key: HADOOP-16694 > URL: https://issues.apache.org/jira/browse/HADOOP-16694 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16694.1.patch, HADOOP-16694.2.patch > > > https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#requireNonNull-T-
[jira] [Commented] (HADOOP-16181) HadoopExecutors shutdown Cleanup
[ https://issues.apache.org/jira/browse/HADOOP-16181?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970447#comment-16970447 ] Steve Loughran commented on HADOOP-16181: - This is too noisy: we don't need to know when shutdowns work. Currently shutting down an s3a FS instance generates 4 lines of info

{code}
2019-11-08 17:33:20,379 [main] INFO s3guard.S3GuardTool (S3GuardTool.java:run(1763)) - Audit scanned 1 directories
2019-11-08 17:33:20,381 [shutdown-hook-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:close(3117)) - Filesystem s3a://hwdev-steve-ireland-new is closed
2019-11-08 17:33:20,384 [shutdown-hook-0] INFO s3a.S3AFileSystem (HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor service. Waiting max 30 SECONDS
2019-11-08 17:33:20,384 [shutdown-hook-0] INFO s3a.S3AFileSystem (HadoopExecutors.java:shutdown(129)) - Succesfully shutdown executor service
2019-11-08 17:33:20,384 [shutdown-hook-0] INFO s3a.S3AFileSystem (HadoopExecutors.java:shutdown(118)) - Gracefully shutting down executor service. Waiting max 30 SECONDS
2019-11-08 17:33:20,384 [shutdown-hook-0] INFO s3a.S3AFileSystem (HadoopExecutors.java:shutdown(129)) - Succesfully shutdown executor service
{code}

Could you do a followup which logs @ debug

> HadoopExecutors shutdown Cleanup > > > Key: HADOOP-16181 > URL: https://issues.apache.org/jira/browse/HADOOP-16181 > Project: Hadoop Common > Issue Type: Improvement > Components: util >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Fix For: 3.2.1 > > Attachments: HADOOP-16181.1.patch, HADOOP-16181.2.patch > > > # Add method description > # Add additional logging > # Do not log-and-throw Exception. Anti-pattern.
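As a hypothetical sketch of the follow-up being requested (this is not the actual `HadoopExecutors` code), the point is that the happy path of a graceful shutdown should emit nothing above DEBUG, reserving WARN for the case where the pool fails to terminate in time:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class QuietShutdown {
  // Illustrative mirror of a graceful-shutdown helper. The comments mark
  // where the routine messages would be logged at DEBUG instead of INFO.
  static boolean shutdown(ExecutorService pool, long timeout, TimeUnit unit)
      throws InterruptedException {
    pool.shutdown();
    // LOG.debug("Gracefully shutting down executor service. Waiting max {} {}",
    //     timeout, unit);
    if (pool.awaitTermination(timeout, unit)) {
      // LOG.debug("Successfully shut down executor service");
      return true;
    }
    // Only the failure path is interesting enough for a user-facing level:
    // LOG.warn("Forcing shutdown; executor did not terminate within {} {}",
    //     timeout, unit);
    pool.shutdownNow();
    return false;
  }

  public static void main(String[] args) throws InterruptedException {
    ExecutorService pool = Executors.newFixedThreadPool(2);
    pool.submit(() -> { });
    System.out.println(shutdown(pool, 5, TimeUnit.SECONDS)); // true
  }
}
```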
[jira] [Commented] (HADOOP-16484) S3A to warn or fail if S3Guard is disabled
[ https://issues.apache.org/jira/browse/HADOOP-16484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970466#comment-16970466 ] Steve Loughran commented on HADOOP-16484: - S3A auth mode can cause confusion in deployments, because people expect there never to be any HTTP requests to S3 in a path marked as authoritative. This is *not* the case when S3Guard doesn't have an entry for the path in the table. Which is the state it is in when the directory was populated using different tools (e.g. the AWS s3 command). Proposed 1. HADOOP-16684 to give more diagnostics about the bucket 2. add an audit command to take a path and verify that it is marked in dynamoDB as authoritative *all the way down* This command is designed to be executed from the commandline and will return different error codes based on different situations * path isn't guarded * path is not authoritative in s3a settings (dir, path) * path not known in table: use the 404/44 response * path contains 1+ dir entry which is non-auth 3. Use this audit after some of the bulk rename, delete, import, commit (soon: upload, copy) operations to verify that, where appropriate, we do update the directories. Particularly for incremental rename() where I have long suspected we may have to do more there. 4. Review documentation and make it clear what is needed (import) after uploading/generating data through other tools. I'm going to pull in the open JIRAs on this topic as they are all related There shouldn't be anything wrong with using the AWS S3 command to create the test table -we just need to tell S3Guard to scan it afterwards, which "s3guard import" does. The audit command will make sure that everything is set up in DynamoDB before the next stage in the test suite. Then, if we still see IO against S3 during list operations, then we can start worrying about whether or not there is actually a bug in the s3a code. 
(we could use it after things like DDB and spark & hive queries too to validate the output is being tagged as auth too)

+add some tests of listLocatedStatus, listFiles, listStatus to verify they don't go near S3 on parts they consider authoritative

Examine the path metadata, declare whether it should be queued for recursive scanning @throws ExitUtil

OK, this is good and I am already pleased to see it in my logs. But I realise we've missed something -in the s3guard tool we explicitly disable S3Guard when instantiating the FS. So we get warning messages which are not in fact correct.

{code}
2019-11-08 17:38:35,656 [main] DEBUG s3guard.S3Guard (S3Guard.java:getMetadataStoreClass(136)) - Metastore option source [fs.s3a.bucket.hwdev-steve-ireland-new.metadatastore.impl via [S3AUtils]]
2019-11-08 17:38:35,657 [main] DEBUG s3guard.S3Guard (S3Guard.java:getMetadataStore(108)) - Using NullMetadataStore metadata store for s3a filesystem
2019-11-08 17:38:35,659 [main] INFO s3a.S3AFileSystem (S3Guard.java:logS3GuardDisabled(849)) - S3Guard is disabled on this bucket: hwdev-steve-ireland-new
2019-11-08 17:38:35,659 [main] DEBUG s3a.S3AUtils (S3AUtils.java:longOption(1001)) - Value of fs.s3a.multipart.purge.age is 360
2019-11-08 17:38:35,665 [main] DEBUG s3a.MultipartUtils (MultipartUtils.java:requestNextBatch(158)) - [1], Requesting next 5000 uploads prefix , next key null, next upload id null
2019-11-08 17:38:35,667 [main] DEBUG s3a.Invoker (DurationInfo.java:(74)) - Starting: listMultipartUploads
2019-11-08 17:38:36,004 [main] DEBUG s3a.Invoker (DurationInfo.java:close(89)) - listMultipartUploads: duration 0:00.338s
2019-11-08 17:38:36,005 [main] DEBUG s3a.MultipartUtils (MultipartUtils.java:requestNextBatch(165)) - New listing state: Upload iterator: prefix ; list count 2; isTruncated=false Total 0 uploads found.
2019-11-08 17:38:36,008 [shutdown-hook-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:close(3117)) - Filesystem s3a://hwdev-steve-ireland-new is closed
{code}

Proposed: just as we force in the null metastore, we will need to set the log to debug. I'm just going to reopen this as a followup. [~gabor.bota]: do you want to do this or shall I do the code and you do the review? > S3A to warn or fail if S3Guard is disabled > -- > > Key: HADOOP-16484 > URL: https://issues.apache.org/jira/browse/HADOOP-16484 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Minor > Fix For: 3.3.0 > > > A seemingly recurrent problem with s3guard is "people who think S3Guard is > turned on but really it isn't" > It's not immediately obvious this is the case, and the fact S3Guard is off > tends to surface after some intermittent failure has actually been detected. > Propose: add
[jira] [Comment Edited] (HADOOP-16484) S3A to warn or fail if S3Guard is disabled
[ https://issues.apache.org/jira/browse/HADOOP-16484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970466#comment-16970466 ] Steve Loughran edited comment on HADOOP-16484 at 11/8/19 5:44 PM: -- OK, this is good and I am already pleased to see it in my logs. But I realise we've missed something -in the s3guard tool we explicitly disable S3Guard when instantiating the FS. So we get warning messages which are not in fact correct.

{code}
2019-11-08 17:38:35,656 [main] DEBUG s3guard.S3Guard (S3Guard.java:getMetadataStoreClass(136)) - Metastore option source [fs.s3a.bucket.hwdev-steve-ireland-new.metadatastore.impl via [S3AUtils]]
2019-11-08 17:38:35,657 [main] DEBUG s3guard.S3Guard (S3Guard.java:getMetadataStore(108)) - Using NullMetadataStore metadata store for s3a filesystem
2019-11-08 17:38:35,659 [main] INFO s3a.S3AFileSystem (S3Guard.java:logS3GuardDisabled(849)) - S3Guard is disabled on this bucket: hwdev-steve-ireland-new
2019-11-08 17:38:35,659 [main] DEBUG s3a.S3AUtils (S3AUtils.java:longOption(1001)) - Value of fs.s3a.multipart.purge.age is 360
2019-11-08 17:38:35,665 [main] DEBUG s3a.MultipartUtils (MultipartUtils.java:requestNextBatch(158)) - [1], Requesting next 5000 uploads prefix , next key null, next upload id null
2019-11-08 17:38:35,667 [main] DEBUG s3a.Invoker (DurationInfo.java:(74)) - Starting: listMultipartUploads
2019-11-08 17:38:36,004 [main] DEBUG s3a.Invoker (DurationInfo.java:close(89)) - listMultipartUploads: duration 0:00.338s
2019-11-08 17:38:36,005 [main] DEBUG s3a.MultipartUtils (MultipartUtils.java:requestNextBatch(165)) - New listing state: Upload iterator: prefix ; list count 2; isTruncated=false Total 0 uploads found.
2019-11-08 17:38:36,008 [shutdown-hook-0] DEBUG s3a.S3AFileSystem (S3AFileSystem.java:close(3117)) - Filesystem s3a://hwdev-steve-ireland-new is closed
{code}

Proposed: just as we force in the null metastore, we will need to set the log to debug. 
I'm just going to reopen this as a followup. [~gabor.bota]: do you want to do this or shall I do the code and you do the review? was (Author: ste...@apache.org): S3A auth mode can cause confusion in deployments, because people expect there never to be any HTTP requests to S3 in a path marked as authoritative. This is *not* the case when S3Guard doesn't have an entry for the path in the table. Which is the state it is in when the directory was populated using different tools (e.g. the AWS s3 command). Proposed 1. HADOOP-16684 to give more diagnostics about the bucket 2. add an audit command to take a path and verify that it is marked in dynamoDB as authoritative *all the way down* This command is designed to be executed from the commandline and will return different error codes based on different situations * path isn't guarded * path is not authoritative in s3a settings (dir, path) * path not known in table: use the 404/44 response * path contains 1+ dir entry which is non-auth 3. Use this audit after some of the bulk rename, delete, import, commit (soon: upload, copy) operations to verify that, where appropriate, we do update the directories. Particularly for incremental rename() where I have long suspected we may have to do more there. 4. Review documentation and make it clear what is needed (import) after uploading/generating data through other tools. I'm going to pull in the open JIRAs on this topic as they are all related There shouldn't be anything wrong with using the AWS S3 command to create the test table -we just need to tell S3Guard to scan it afterwards, which "s3guard import" does. The audit command will make sure that everything is set up in DynamoDB before the next stage in the test suite. Then, if we still see IO against S3 during list operations, then we can start worrying about whether or not there is actually a bug in the s3a code. 
(we could use it after things like DDB and Spark & Hive queries too, to validate that the output is being tagged as auth) +add some tests of listLocatedStatus, listFiles, listStatus to verify they don't go near S3 on parts they consider authoritative
[jira] [Reopened] (HADOOP-16484) S3A to warn or fail if S3Guard is disabled
[ https://issues.apache.org/jira/browse/HADOOP-16484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reopened HADOOP-16484: - > S3A to warn or fail if S3Guard is disabled > -- > > Key: HADOOP-16484 > URL: https://issues.apache.org/jira/browse/HADOOP-16484 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Minor > Fix For: 3.3.0 > > > A seemingly recurrent problem with s3guard is "people who think S3Guard is > turned on but really it isn't" > It's not immediately obvious this is the case, and the fact S3Guard is off > tends to surface after some intermittent failure has actually been detected. > Propose: add a configuration parameter which chooses what to do when an S3A > FS is instantiated without S3Guard > * silent : today; do nothing. > * status: give s3guard on/off status > * inform: log FS is instantiated without s3guard > * warn: Warn that data may be at risk in workflows > * fail > deployments could then choose which level of reaction they want. I'd make the > default "inform" for now; any on-prem object store deployment should switch > to silent, and if you really want strictness, fail is the ultimate option -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
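The five reaction levels proposed in the issue could be modelled as a simple switch. This is a hedged sketch only: the class, enum, and method names are invented for illustration and do not reflect the actual S3A code or the final configuration key.

```java
// Hypothetical sketch of the silent/status/inform/warn/fail reaction levels
// proposed in HADOOP-16484; names and messages are illustrative only.
public class S3GuardReaction {
    enum Level { SILENT, STATUS, INFORM, WARN, FAIL }

    static String onS3GuardDisabled(Level level, String bucket) {
        switch (level) {
        case SILENT:
            return "";                                                    // today: do nothing
        case STATUS:
            return "S3Guard is disabled on bucket " + bucket;             // on/off status
        case INFORM:
            return "INFO: FS for " + bucket + " instantiated without S3Guard";
        case WARN:
            return "WARN: data in " + bucket + " may be at risk without S3Guard";
        case FAIL:
            throw new IllegalStateException(
                "S3Guard is required but disabled on bucket " + bucket);
        default:
            throw new IllegalArgumentException("Unknown level: " + level);
        }
    }
}
```

With such a switch, changing the deployment's strictness is a one-line configuration change, with "inform" as the suggested default.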
[jira] [Updated] (HADOOP-16694) Use Objects requireNonNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-16694: Attachment: HADOOP-16694.3.patch > Use Objects requireNonNull Where Appropriate > > > Key: HADOOP-16694 > URL: https://issues.apache.org/jira/browse/HADOOP-16694 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16694.1.patch, HADOOP-16694.2.patch, > HADOOP-16694.3.patch > > > https://docs.oracle.com/javase/8/docs/api/java/util/Objects.html#requireNonNull-T-
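The kind of cleanup this patch applies can be illustrated with a small example: replacing hand-rolled null checks in constructors with java.util.Objects.requireNonNull. The class and field names below are invented for illustration; they are not taken from the actual patch.

```java
import java.util.Objects;

// Illustrative only: shows the requireNonNull pattern, not actual Hadoop code.
public class Connection {
    private final String host;
    private final String user;

    Connection(String host, String user) {
        // Before: if (host == null) { throw new NullPointerException("host"); }
        // requireNonNull does the check, throws NPE with the message, and
        // returns the value, so it can be used inline in the assignment.
        this.host = Objects.requireNonNull(host, "host must not be null");
        this.user = Objects.requireNonNull(user, "user must not be null");
    }

    String endpoint() {
        return user + "@" + host;
    }
}
```

Beyond brevity, the benefit is that the null check happens at assignment time, so a bad argument fails fast with a descriptive message instead of surfacing as an NPE later.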
[jira] [Commented] (HADOOP-16693) Review InterruptedException Handling
[ https://issues.apache.org/jira/browse/HADOOP-16693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970486#comment-16970486 ] David Mollitor commented on HADOOP-16693: - [~ste...@apache.org] Please review when you get a chance > Review InterruptedException Handling > > > Key: HADOOP-16693 > URL: https://issues.apache.org/jira/browse/HADOOP-16693 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Major > Attachments: HADOOP-16693.1.patch > > > Difficult to do well. I hopefully improved it some.
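The core pattern such a review typically enforces is: never swallow InterruptedException; re-assert the thread's interrupt status so callers can still observe it. The sketch below shows the pattern with an invented method name; it is not code from the attached patch.

```java
// Illustrative InterruptedException handling: restore the interrupt flag
// instead of swallowing it. Method name is hypothetical.
public class InterruptUtil {
    static boolean sleepUninterrupted(long millis) {
        try {
            Thread.sleep(millis);
            return true;                          // slept the full duration
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();   // re-assert the interrupt flag
            return false;                         // caller can see it was cut short
        }
    }
}
```

Catching InterruptedException clears the thread's interrupt status, so a catch block that only logs (or does nothing) silently hides the interruption from everything further up the stack; re-interrupting preserves it.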
[GitHub] [hadoop] sunchao opened a new pull request #1706: HDFS-14959: [SBNN read] access time should be turned off
sunchao opened a new pull request #1706: HDFS-14959: [SBNN read] access time should be turned off URL: https://github.com/apache/hadoop/pull/1706 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16484) S3A to warn or fail if S3Guard is disabled
[ https://issues.apache.org/jira/browse/HADOOP-16484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970505#comment-16970505 ] Gabor Bota commented on HADOOP-16484: - Sure, I'm happy to do this. > S3A to warn or fail if S3Guard is disabled > -- > > Key: HADOOP-16484 > URL: https://issues.apache.org/jira/browse/HADOOP-16484 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Minor > Fix For: 3.3.0 > > > A seemingly recurrent problem with s3guard is "people who think S3Guard is > turned on but really it isn't" > It's not immediately obvious this is the case, and the fact S3Guard is off > tends to surface after some intermittent failure has actually been detected. > Propose: add a configuration parameter which chooses what to do when an S3A > FS is instantiated without S3Guard > * silent : today; do nothing. > * status: give s3guard on/off status > * inform: log FS is instantiated without s3guard > * warn: Warn that data may be at risk in workflows > * fail > deployments could then choose which level of reaction they want. I'd make the > default "inform" for now; any on-prem object store deployment should switch > to silent, and if you really want strictness, fail is the ultimate option
[jira] [Commented] (HADOOP-16692) UserGroupInformation Treats kerberosMinSecondsBeforeRelogin as Millis
[ https://issues.apache.org/jira/browse/HADOOP-16692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970513#comment-16970513 ] Hadoop QA commented on HADOOP-16692: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 34s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 21s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 43s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 77 unchanged - 2 fixed = 78 total (was 79) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 56s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 21s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 15s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 42s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}100m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | HADOOP-16692 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12985303/HADOOP-16692.2.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 145f4c19f93b 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 42fc888 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16652/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16652/testReport/ | | Max. process+thread count | 1357 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-
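The bug class named in the HADOOP-16692 subject line, a value configured in seconds being compared as if it were milliseconds, can be sketched as below. Names are illustrative and not the actual UserGroupInformation code.

```java
// Sketch of the seconds-vs-milliseconds unit bug described by HADOOP-16692:
// a setting expressed in seconds must be scaled before being compared with
// millisecond timestamps. Class and method names are hypothetical.
public class ReloginWindow {
    static boolean hasSufficientTimeElapsed(long lastLoginMillis, long nowMillis,
                                            long minSecondsBeforeRelogin) {
        // The buggy form compares the elapsed milliseconds against
        // minSecondsBeforeRelogin directly, shrinking the window 1000x.
        return nowMillis - lastLoginMillis >= minSecondsBeforeRelogin * 1000L;
    }
}
```

With a 60-second minimum, the correct form refuses a relogin at 59 seconds and allows it at 60; the unscaled form would allow a relogin after only 60 milliseconds.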
[GitHub] [hadoop] steveloughran opened a new pull request #1707: HADOOP-16697. Tune/audit auth mode
steveloughran opened a new pull request #1707: HADOOP-16697. Tune/audit auth mode URL: https://github.com/apache/hadoop/pull/1707 This adds a new s3guard command to audit a s3guard bucket's authoritative state: hadoop s3guard authoritative -check-config s3a://landsat-pds Also adds more diags of what is going on, including a specific bulk operation type "listing" which is used for listing initiated updates. No tests or docs, yet. Contains and supersedes #1698
[GitHub] [hadoop] steveloughran commented on issue #1698: HADOOP-16684. Bucket info to add more on authoritative mode
steveloughran commented on issue #1698: HADOOP-16684. Bucket info to add more on authoritative mode URL: https://github.com/apache/hadoop/pull/1698#issuecomment-551947697 pulled into the bigger #1707 PR
[GitHub] [hadoop] hadoop-yetus commented on issue #1706: HDFS-14959: [SBNN read] access time should be turned off
hadoop-yetus commented on issue #1706: HDFS-14959: [SBNN read] access time should be turned off URL: https://github.com/apache/hadoop/pull/1706#issuecomment-551949249 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 84 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1358 | trunk passed | | +1 | mvnsite | 84 | trunk passed | | +1 | shadedclient | 2327 | branch has no errors when building and testing our client artifacts. | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 72 | the patch passed | | +1 | mvnsite | 80 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 883 | patch has no errors when building and testing our client artifacts. | ||| _ Other Tests _ | | +1 | asflicense | 30 | The patch does not generate ASF License warnings. | | | | 3592 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1706/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1706 | | JIRA Issue | HDFS-14959 | | Optional Tests | dupname asflicense mvnsite | | uname | Linux 66ca89553e90 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 42fc888 | | Max. process+thread count | 307 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1706/1/console | | versions | git=2.7.4 maven=3.3.9 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. 
[GitHub] [hadoop] jojochuang commented on issue #1706: HDFS-14959: [SBNN read] access time should be turned off
jojochuang commented on issue #1706: HDFS-14959: [SBNN read] access time should be turned off URL: https://github.com/apache/hadoop/pull/1706#issuecomment-551955933 LGTM
[jira] [Commented] (HADOOP-16694) Use Objects requireNonNull Where Appropriate
[ https://issues.apache.org/jira/browse/HADOOP-16694?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970543#comment-16970543 ] Hadoop QA commented on HADOOP-16694: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 40s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 42s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 19s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 28s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 55s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 new + 930 unchanged - 21 fixed = 932 total (was 951) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 45s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 20s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 0s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 41s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}104m 8s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | HADOOP-16694 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12985379/HADOOP-16694.3.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b95500a88eba 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 42fc888 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_222 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16653/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16653/testReport/ | | Max. process+thread count | 1794 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-com
[jira] [Updated] (HADOOP-16688) Update Hadoop website to mention Ozone mailing lists
[ https://issues.apache.org/jira/browse/HADOOP-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HADOOP-16688: --- Resolution: Fixed Status: Resolved (was: Patch Available) +1 Merged via GitHub. Thanks [~ayushtkn]! > Update Hadoop website to mention Ozone mailing lists > > > Key: HADOOP-16688 > URL: https://issues.apache.org/jira/browse/HADOOP-16688 > Project: Hadoop Common > Issue Type: Improvement > Components: website >Reporter: Arpit Agarwal >Priority: Major > > Now that Ozone has its separate mailing lists, let's list them on the Hadoop > website. > https://hadoop.apache.org/mailing_lists.html > Thanks to [~ayushtkn] for suggesting this.
[jira] [Assigned] (HADOOP-16696) Adding an option to Always use Read Ahead, even for non sequential reads
[ https://issues.apache.org/jira/browse/HADOOP-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Da Zhou reassigned HADOOP-16696: Assignee: Saurabh > Adding an option to Always use Read Ahead, even for non sequential reads > > > Key: HADOOP-16696 > URL: https://issues.apache.org/jira/browse/HADOOP-16696 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Saurabh >Assignee: Saurabh >Priority: Major > Attachments: patch1.diff > > > Adding a config fs.azure.always.readahead, which is disabled by default, to > allow read ahead in case of non-sequential reads, such as when reading > parquet file in spark.
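If the property lands under the name given in the issue description, enabling it would be a standard core-site.xml entry. This fragment is an assumption based on that description; verify the final key name and default against the merged patch.

```xml
<!-- Hypothetical core-site.xml fragment; property name taken from the JIRA
     description (fs.azure.always.readahead, default false). -->
<property>
  <name>fs.azure.always.readahead</name>
  <value>true</value>
</property>
```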
[jira] [Assigned] (HADOOP-16688) Update Hadoop website to mention Ozone mailing lists
[ https://issues.apache.org/jira/browse/HADOOP-16688?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena reassigned HADOOP-16688: - Assignee: Ayush Saxena > Update Hadoop website to mention Ozone mailing lists > > > Key: HADOOP-16688 > URL: https://issues.apache.org/jira/browse/HADOOP-16688 > Project: Hadoop Common > Issue Type: Improvement > Components: website >Reporter: Arpit Agarwal >Assignee: Ayush Saxena >Priority: Major > > Now that Ozone has its separate mailing lists, let's list them on the Hadoop > website. > https://hadoop.apache.org/mailing_lists.html > Thanks to [~ayushtkn] for suggesting this.
[jira] [Updated] (HADOOP-15829) Review of NetgroupCache
[ https://issues.apache.org/jira/browse/HADOOP-15829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-15829: Status: Open (was: Patch Available) > Review of NetgroupCache > --- > > Key: HADOOP-15829 > URL: https://issues.apache.org/jira/browse/HADOOP-15829 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Attachments: HDFS-13971.1.patch > > > * Simplify code and performance by using Guava Multimap
[jira] [Updated] (HADOOP-15829) Review of NetgroupCache
[ https://issues.apache.org/jira/browse/HADOOP-15829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-15829: Status: Patch Available (was: Open) > Review of NetgroupCache > --- > > Key: HADOOP-15829 > URL: https://issues.apache.org/jira/browse/HADOOP-15829 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Attachments: HADOOP-15829.2.patch, HDFS-13971.1.patch > > > * Simplify code and performance by using Guava Multimap
[jira] [Updated] (HADOOP-15829) Review of NetgroupCache
[ https://issues.apache.org/jira/browse/HADOOP-15829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-15829: Attachment: HADOOP-15829.2.patch > Review of NetgroupCache > --- > > Key: HADOOP-15829 > URL: https://issues.apache.org/jira/browse/HADOOP-15829 > Project: Hadoop Common > Issue Type: Improvement > Components: security >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Attachments: HADOOP-15829.2.patch, HDFS-13971.1.patch > > > * Simplify code and performance by using Guava Multimap
[GitHub] [hadoop] sapant-msft opened a new pull request #1708: HADOOP-16696: Always read ahead config, to use read ahead even for non sequential reads.
sapant-msft opened a new pull request #1708: HADOOP-16696: Always read ahead config, to use read ahead even for non sequential reads. URL: https://github.com/apache/hadoop/pull/1708 Adding a config alwaysReadAhead, set to false by default, to be able to use ABFS's read-ahead capability even for non-sequential reads. At the moment, only sequential reads support read ahead: a read ahead is queued only after a sequential read is made, so we miss out on gains where a non-sequential read is followed by a sequential one. For example, seek(n), read 1 byte, read 10 bytes, seek(m) could benefit from read ahead, but this is not currently supported.
[GitHub] [hadoop] hadoop-yetus commented on issue #1707: HADOOP-16697. Tune/audit auth mode
hadoop-yetus commented on issue #1707: HADOOP-16697. Tune/audit auth mode URL: https://github.com/apache/hadoop/pull/1707#issuecomment-551980383 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 1970 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1417 | trunk passed | | +1 | compile | 38 | trunk passed | | +1 | checkstyle | 29 | trunk passed | | +1 | mvnsite | 43 | trunk passed | | +1 | shadedclient | 980 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 32 | trunk passed | | 0 | spotbugs | 70 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 67 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 39 | the patch passed | | +1 | compile | 29 | the patch passed | | +1 | javac | 29 | the patch passed | | -0 | checkstyle | 19 | hadoop-tools/hadoop-aws: The patch generated 1 new + 39 unchanged - 0 fixed = 40 total (was 39) | | +1 | mvnsite | 33 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 994 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 27 | the patch passed | | -1 | findbugs | 82 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | +1 | unit | 80 | hadoop-aws in the patch passed. | | +1 | asflicense | 33 | The patch does not generate ASF License warnings. 
| | | | 6013 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-aws | | | Switch statement found in org.apache.hadoop.fs.s3a.s3guard.S3GuardTool$BucketInfo.run(String[], PrintStream) where one case falls through to the next case At S3GuardTool.java:PrintStream) where one case falls through to the next case At S3GuardTool.java:[lines 1301-1308] | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1707 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e8c2c4b1f247 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 42fc888 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/testReport/ | | Max. process+thread count | 454 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1707/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. 
[jira] [Commented] (HADOOP-16696) Adding an option to Always use Read Ahead, even for non sequential reads
[ https://issues.apache.org/jira/browse/HADOOP-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970571#comment-16970571 ] Saurabh commented on HADOOP-16696: -- Hi Steve, Thanks for the suggestion, created PR : [link|[https://github.com/apache/hadoop/pull/1708]]. Thank You, Saurabh > Adding an option to Always use Read Ahead, even for non sequential reads > > > Key: HADOOP-16696 > URL: https://issues.apache.org/jira/browse/HADOOP-16696 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Saurabh >Assignee: Saurabh >Priority: Major > Attachments: patch1.diff > > > Adding a config fs.azure.always.readahead, which is disabled by default, to > allow read ahead in case of non-sequential reads, such as when reading > parquet file in spark. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-16696) Adding an option to Always use Read Ahead, even for non sequential reads
[ https://issues.apache.org/jira/browse/HADOOP-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970571#comment-16970571 ] Saurabh edited comment on HADOOP-16696 at 11/8/19 8:55 PM: --- Hi Steve, Thanks for the suggestion, created PR : [https://github.com/apache/hadoop/pull/1708]. Thank You, Saurabh was (Author: saurabhpant): Hi Steve, Thanks for the suggestion, created PR : [link|[https://github.com/apache/hadoop/pull/1708]]. Thank You, Saurabh > Adding an option to Always use Read Ahead, even for non sequential reads > > > Key: HADOOP-16696 > URL: https://issues.apache.org/jira/browse/HADOOP-16696 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Saurabh >Assignee: Saurabh >Priority: Major > Attachments: patch1.diff > > > Adding a config fs.azure.always.readahead, which is disabled by default, to > allow read ahead in case of non-sequential reads, such as when reading > parquet file in spark. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
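The issue description names the new configuration key, `fs.azure.always.readahead`, disabled by default. Assuming it is set like any other Hadoop site property (the surrounding file and deployment layout are site-specific), enabling read-ahead for non-sequential reads would look like:

```xml
<!-- Hypothetical site-configuration fragment; property name taken from the
     HADOOP-16696 description. Default is false (read ahead only for
     sequential reads); true forces read ahead for all reads. -->
<property>
  <name>fs.azure.always.readahead</name>
  <value>true</value>
</property>
```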
[jira] [Updated] (HADOOP-16612) Track Azure Blob File System client-perceived latency
[ https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeetesh Mangwani updated HADOOP-16612: -- Attachment: (was: HADOOP-16612-001.patch) > Track Azure Blob File System client-perceived latency > - > > Key: HADOOP-16612 > URL: https://issues.apache.org/jira/browse/HADOOP-16612 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, hdfs-client >Reporter: Jeetesh Mangwani >Assignee: Jeetesh Mangwani >Priority: Major > Attachments: HADOOP-16612.001.patch, HADOOP-16612.002.patch, > HADOOP-16612.003.patch > > > Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency > in the Hadoop ABFS driver. > The latency information is sent back to the ADLS Gen 2 REST API endpoints in > the subsequent requests. > Here's the PR: https://github.com/apache/hadoop/pull/1611 -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16612) Track Azure Blob File System client-perceived latency
[ https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970578#comment-16970578 ] Jeetesh Mangwani commented on HADOOP-16612: --- Test results: non-xns [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 395, Failures: 0, Errors: 0, Skipped: 207 [ERROR] Tests run: 192, Failures: 0, Errors: 2, Skipped: 24 [ERROR] Errors: [ERROR] ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:60->testReadWriteAndSeek:75 » TestTimedOut [ERROR] ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » TestTimedOut xns [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [ERROR] Tests run: 395, Failures: 1, Errors: 0, Skipped: 207 [ERROR] Failures: [ERROR] ITestGetNameSpaceEnabled.testXNSAccount:51->Assert.assertTrue:41->Assert.fail:88 Expecting getIsNamespaceEnabled() return true [WARNING] Tests run: 192, Failures: 0, Errors: 0, Skipped: 24 --- Comments: 1. ITestGetNameSpaceEnabled.testNonXNSAccount: fails because the HTTP response status is not 400, but is 404 3. ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek: times out, probably because this is a scale test and my VM is slow 4. ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads: times out, probably because there are lot of heavy writes and my VM is slow > Track Azure Blob File System client-perceived latency > - > > Key: HADOOP-16612 > URL: https://issues.apache.org/jira/browse/HADOOP-16612 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, hdfs-client >Reporter: Jeetesh Mangwani >Assignee: Jeetesh Mangwani >Priority: Major > Attachments: HADOOP-16612.001.patch, HADOOP-16612.002.patch, > HADOOP-16612.003.patch > > > Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency > in the Hadoop ABFS driver. 
> The latency information is sent back to the ADLS Gen 2 REST API endpoints in > the subsequent requests. > Here's the PR: https://github.com/apache/hadoop/pull/1611
[GitHub] [hadoop] jeeteshm edited a comment on issue #1611: HADOOP-16612 Track Azure Blob File System client-perceived latency
jeeteshm edited a comment on issue #1611: HADOOP-16612 Track Azure Blob File System client-perceived latency URL: https://github.com/apache/hadoop/pull/1611#issuecomment-551987988 > Looks good. Please fix the format issues and provide the tests result. @DadanielZ: I have fixed the format/checkstyle issues. Test results are mentioned in the JIRA here: https://issues.apache.org/jira/browse/HADOOP-16612 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jeeteshm commented on issue #1611: HADOOP-16612 Track Azure Blob File System client-perceived latency
jeeteshm commented on issue #1611: HADOOP-16612 Track Azure Blob File System client-perceived latency URL: https://github.com/apache/hadoop/pull/1611#issuecomment-551987988 > Looks good. Please fix the format issues and provide the tests result. I have fixed the format/checkstyle issues. Test results are mentioned in the JIRA here: https://issues.apache.org/jira/browse/HADOOP-16612 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-15686) Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr
[ https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970579#comment-16970579 ] Wei-Chiu Chuang commented on HADOOP-15686: -- Sorry, this is a long overdue task. I took a look at HADOOP-13597 and added back SLF4JBridgeHandler. > Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr > - > > Key: HADOOP-15686 > URL: https://issues.apache.org/jira/browse/HADOOP-15686 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-15686.001.patch, HADOOP-15686.002.patch, > HADOOP-15686.003.patch > > > After we switched underlying system of KMS from Tomcat to Jetty, we started > to observe a lot of bogus messages like the follow [1]. It is harmless but > very annoying. Let's suppress it in log4j configuration. > [1] > {quote} > Aug 20, 2018 11:26:17 AM > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator > buildModelAndSchemas > SEVERE: Failed to generate the schema for the JAX-B elements > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of > IllegalAnnotationExceptions > java.util.Map is an interface, and JAXB can't handle interfaces. > this problem is related to the following location: > at java.util.Map > java.util.Map does not have a no-arg default constructor. 
> this problem is related to the following location: > at java.util.Map > at > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170) > at > com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145) > at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) > at javax.xml.bind.ContextFinder.find(ContextFinder.java:441) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) > at > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:169) > at > com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator.createExternalGrammar(AbstractWadlGeneratorGrammarGenerator.java:405) > at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:149) > at > com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:119) > at > com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:138) > at > com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:110) > at > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) > at > 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) > at > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) > at > com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) > at javax.servlet.http.HttpServlet.service(HttpSer
[jira] [Updated] (HADOOP-15686) Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr
[ https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15686: - Attachment: HADOOP-15686.003.patch > Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr > - > > Key: HADOOP-15686 > URL: https://issues.apache.org/jira/browse/HADOOP-15686 > Project: Hadoop Common > Issue Type: Bug > Components: kms >Affects Versions: 3.0.0 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang >Priority: Major > Attachments: HADOOP-15686.001.patch, HADOOP-15686.002.patch, > HADOOP-15686.003.patch > > > After we switched underlying system of KMS from Tomcat to Jetty, we started > to observe a lot of bogus messages like the follow [1]. It is harmless but > very annoying. Let's suppress it in log4j configuration. > [1] > {quote} > Aug 20, 2018 11:26:17 AM > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator > buildModelAndSchemas > SEVERE: Failed to generate the schema for the JAX-B elements > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException: 2 counts of > IllegalAnnotationExceptions > java.util.Map is an interface, and JAXB can't handle interfaces. > this problem is related to the following location: > at java.util.Map > java.util.Map does not have a no-arg default constructor. 
> this problem is related to the following location: > at java.util.Map > at > com.sun.xml.bind.v2.runtime.IllegalAnnotationsException$Builder.check(IllegalAnnotationsException.java:106) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl.getTypeInfoSet(JAXBContextImpl.java:489) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl.(JAXBContextImpl.java:319) > at > com.sun.xml.bind.v2.runtime.JAXBContextImpl$JAXBContextBuilder.build(JAXBContextImpl.java:1170) > at > com.sun.xml.bind.v2.ContextFactory.createContext(ContextFactory.java:145) > at sun.reflect.GeneratedMethodAccessor32.invoke(Unknown Source) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:247) > at javax.xml.bind.ContextFinder.newInstance(ContextFinder.java:234) > at javax.xml.bind.ContextFinder.find(ContextFinder.java:441) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:641) > at javax.xml.bind.JAXBContext.newInstance(JAXBContext.java:584) > at > com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator.buildModelAndSchemas(WadlGeneratorJAXBGrammarGenerator.java:169) > at > com.sun.jersey.server.wadl.generators.AbstractWadlGeneratorGrammarGenerator.createExternalGrammar(AbstractWadlGeneratorGrammarGenerator.java:405) > at com.sun.jersey.server.wadl.WadlBuilder.generate(WadlBuilder.java:149) > at > com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:119) > at > com.sun.jersey.server.impl.wadl.WadlApplicationContextImpl.getApplication(WadlApplicationContextImpl.java:138) > at > com.sun.jersey.server.impl.wadl.WadlMethodFactory$WadlOptionsMethodDispatcher.dispatch(WadlMethodFactory.java:110) > at > com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302) > at > 
com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108) > at > com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147) > at > com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542) > at > com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419) > at > com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409) > at > com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558) > at > com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733) > at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) > at > org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848) > at > org.ecl
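The HADOOP-15686 description proposes suppressing these messages in the log4j configuration (with the `SLF4JBridgeHandler` routing java.util.logging output through log4j, per the comment above). A hedged sketch of what that suppression might look like, with the logger name taken from the stack trace; the actual KMS log4j file name and chosen level are deployment-specific:

```properties
# Hypothetical log4j fragment: silence the Jersey WADL grammar generator
# whose SEVERE stack traces appear in KMS stderr (logger name from the
# trace above). OFF drops the messages entirely; ERROR would still show
# genuine failures from other code paths under the same package.
log4j.logger.com.sun.jersey.server.wadl.generators.WadlGeneratorJAXBGrammarGenerator=OFF
```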
[jira] [Commented] (HADOOP-16696) Adding an option to Always use Read Ahead, even for non sequential reads
[ https://issues.apache.org/jira/browse/HADOOP-16696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970593#comment-16970593 ] Hadoop QA commented on HADOOP-16696: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 1s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 41s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 0m 52s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 18s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 37s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 22s{color} | {color:green} hadoop-azure in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 33s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 55m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1708/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1708 | | JIRA Issue | HADOOP-16696 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ff9d08f0db60 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 42fc888 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1708/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | |
[GitHub] [hadoop] hadoop-yetus commented on issue #1708: HADOOP-16696: Always read ahead config, to use read ahead even for non sequential reads.
hadoop-yetus commented on issue #1708: HADOOP-16696: Always read ahead config, to use read ahead even for non sequential reads. URL: https://github.com/apache/hadoop/pull/1708#issuecomment-551998615 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 38 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1087 | trunk passed | | +1 | compile | 33 | trunk passed | | +1 | checkstyle | 26 | trunk passed | | +1 | mvnsite | 35 | trunk passed | | +1 | shadedclient | 821 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 28 | trunk passed | | 0 | spotbugs | 52 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 49 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 29 | the patch passed | | +1 | compile | 25 | the patch passed | | +1 | javac | 25 | the patch passed | | -0 | checkstyle | 18 | hadoop-tools/hadoop-azure: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2) | | +1 | mvnsite | 27 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 817 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 25 | the patch passed | | +1 | findbugs | 55 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 82 | hadoop-azure in the patch passed. | | +1 | asflicense | 33 | The patch does not generate ASF License warnings. 
| | | | 3335 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.4 Server=19.03.4 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1708/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1708 | | JIRA Issue | HADOOP-16696 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux ff9d08f0db60 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 42fc888 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1708/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1708/1/testReport/ | | Max. process+thread count | 452 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1708/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HADOOP-16612) Track Azure Blob File System client-perceived latency
[ https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970578#comment-16970578 ] Jeetesh Mangwani edited comment on HADOOP-16612 at 11/8/19 9:52 PM: [~DanielZhou] Test results: non-xns [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 395, Failures: 0, Errors: 0, Skipped: 207 [ERROR] Tests run: 192, Failures: 0, Errors: 2, Skipped: 24 [ERROR] Errors: [ERROR] ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:60->testReadWriteAndSeek:75 » TestTimedOut [ERROR] ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » TestTimedOut xns [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [ERROR] Tests run: 395, Failures: 1, Errors: 0, Skipped: 207 [ERROR] Failures: [ERROR] ITestGetNameSpaceEnabled.testXNSAccount:51->Assert.assertTrue:41->Assert.fail:88 Expecting getIsNamespaceEnabled() return true [WARNING] Tests run: 192, Failures: 0, Errors: 0, Skipped: 24 --- Comments: 1. ITestGetNameSpaceEnabled.testNonXNSAccount: fails because the HTTP response status is not 400, but is 404 3. ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek: times out, probably because this is a scale test and my VM is slow 4. 
ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads: times out, probably because there are lot of heavy writes and my VM is slow was (Author: jeeteshm): Test results: non-xns [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 395, Failures: 0, Errors: 0, Skipped: 207 [ERROR] Tests run: 192, Failures: 0, Errors: 2, Skipped: 24 [ERROR] Errors: [ERROR] ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek:60->testReadWriteAndSeek:75 » TestTimedOut [ERROR] ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » TestTimedOut xns [INFO] Tests run: 52, Failures: 0, Errors: 0, Skipped: 0 [ERROR] Tests run: 395, Failures: 1, Errors: 0, Skipped: 207 [ERROR] Failures: [ERROR] ITestGetNameSpaceEnabled.testXNSAccount:51->Assert.assertTrue:41->Assert.fail:88 Expecting getIsNamespaceEnabled() return true [WARNING] Tests run: 192, Failures: 0, Errors: 0, Skipped: 24 --- Comments: 1. ITestGetNameSpaceEnabled.testNonXNSAccount: fails because the HTTP response status is not 400, but is 404 3. ITestAbfsReadWriteAndSeek.testReadAndWriteWithDifferentBufferSizesAndSeek: times out, probably because this is a scale test and my VM is slow 4. ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads: times out, probably because there are lot of heavy writes and my VM is slow > Track Azure Blob File System client-perceived latency > - > > Key: HADOOP-16612 > URL: https://issues.apache.org/jira/browse/HADOOP-16612 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure, hdfs-client >Reporter: Jeetesh Mangwani >Assignee: Jeetesh Mangwani >Priority: Major > Attachments: HADOOP-16612.001.patch, HADOOP-16612.002.patch, > HADOOP-16612.003.patch > > > Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency > in the Hadoop ABFS driver. > The latency information is sent back to the ADLS Gen 2 REST API endpoints in > the subsequent requests. 
> Here's the PR: https://github.com/apache/hadoop/pull/1611
[jira] [Updated] (HADOOP-15852) Refactor QuotaUsage
[ https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] David Mollitor updated HADOOP-15852: Attachment: HADOOP-15852.3.patch > Refactor QuotaUsage > --- > > Key: HADOOP-15852 > URL: https://issues.apache.org/jira/browse/HADOOP-15852 > Project: Hadoop Common > Issue Type: Improvement > Components: common >Affects Versions: 3.2.0 >Reporter: David Mollitor >Assignee: David Mollitor >Priority: Minor > Fix For: 3.3.0 > > Attachments: HADOOP-15852.1.patch, HADOOP-15852.2.patch, > HADOOP-15852.3.patch > > > My new mission is to remove instances of {{StringBuffer}} in favor of > {{StringBuilder}}. > * Simplify Code > * Use Eclipse to generate hashcode/equals > * User StringBuilder instead of StringBuffer -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
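The refactor described above replaces `StringBuffer` with `StringBuilder`. A minimal sketch of the change, assuming a hypothetical `QuotaSummary` stand-in rather than the real `QuotaUsage` internals:

```java
// Hedged illustration of the StringBuffer -> StringBuilder swap from
// HADOOP-15852; QuotaSummary and its fields are hypothetical stand-ins.
public class QuotaSummary {
    private final long quota;
    private final long used;

    QuotaSummary(long quota, long used) {
        this.quota = quota;
        this.used = used;
    }

    @Override
    public String toString() {
        // StringBuilder has the same API as StringBuffer but skips the
        // per-call synchronization that a method-local builder never needs.
        return new StringBuilder()
            .append("quota=").append(quota)
            .append(", used=").append(used)
            .toString();
    }

    public static void main(String[] args) {
        System.out.println(new QuotaSummary(100, 42)); // quota=100, used=42
    }
}
```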
[jira] [Commented] (HADOOP-15829) Review of NetgroupCache
[ https://issues.apache.org/jira/browse/HADOOP-15829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970613#comment-16970613 ]

Hadoop QA commented on HADOOP-15829:

(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 54s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 16s | trunk passed |
| +1 | compile | 17m 51s | trunk passed |
| +1 | checkstyle | 0m 42s | trunk passed |
| +1 | mvnsite | 1m 14s | trunk passed |
| +1 | shadedclient | 14m 57s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 42s | trunk passed |
| +1 | javadoc | 1m 24s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 49s | the patch passed |
| +1 | compile | 16m 13s | the patch passed |
| +1 | javac | 16m 13s | the patch passed |
| +1 | checkstyle | 0m 44s | hadoop-common-project/hadoop-common: The patch generated 0 new + 6 unchanged - 5 fixed = 6 total (was 11) |
| +1 | mvnsite | 1m 17s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 14m 46s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 2m 18s | the patch passed |
| +1 | javadoc | 1m 25s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 10m 21s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 46s | The patch does not generate ASF License warnings. |
| | | 106m 46s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HADOOP-15829 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12985395/HADOOP-15829.2.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux bd2eda194c0f 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 42fc888 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16654/testReport/ |
| Max. process+thread count | 1598 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16654/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Updated] (HADOOP-15852) Refactor QuotaUsage
[ https://issues.apache.org/jira/browse/HADOOP-15852?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Mollitor updated HADOOP-15852:
    Status: Patch Available  (was: Reopened)

I still can't figure out how these changes produced a regression in {{TestQuota}}, but I made a fresh patch and {{TestQuota}} runs successfully locally... finally time to close this one out.
[jira] [Commented] (HADOOP-15686) Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr
[ https://issues.apache.org/jira/browse/HADOOP-15686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970629#comment-16970629 ]

Hadoop QA commented on HADOOP-15686:

(/) +1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 43s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 28m 58s | trunk passed |
| +1 | compile | 22m 0s | trunk passed |
| +1 | checkstyle | 0m 34s | trunk passed |
| +1 | mvnsite | 0m 40s | trunk passed |
| +1 | shadedclient | 15m 44s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 47s | trunk passed |
| +1 | javadoc | 0m 36s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 23s | the patch passed |
| +1 | compile | 20m 41s | the patch passed |
| +1 | javac | 20m 41s | the patch passed |
| +1 | checkstyle | 0m 28s | the patch passed |
| +1 | mvnsite | 0m 39s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 20s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 51s | the patch passed |
| +1 | javadoc | 0m 35s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 4m 2s | hadoop-kms in the patch passed. |
| +1 | asflicense | 1m 4s | The patch does not generate ASF License warnings. |
| | | 115m 18s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.4 Server=19.03.4 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HADOOP-15686 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12985397/HADOOP-15686.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux f38352903c5d 4.15.0-66-generic #75-Ubuntu SMP Tue Oct 1 05:24:09 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 42fc888 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16655/testReport/ |
| Max. process+thread count | 307 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-kms U: hadoop-common-project/hadoop-kms |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16655/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |

This message was automatically generated.

> Supress bogus AbstractWadlGeneratorGrammarGenerator in KMS stderr
> -
>
[jira] [Commented] (HADOOP-16656) Document FairCallQueue configs in core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16970631#comment-16970631 ]

Siyao Meng commented on HADOOP-16656:

The checkstyle warning seems unrelated. Patch rev 002 is ready for review. [~weichiu]

> Document FairCallQueue configs in core-default.xml
> --
>
> Key: HADOOP-16656
> URL: https://issues.apache.org/jira/browse/HADOOP-16656
> Project: Hadoop Common
> Issue Type: Task
> Components: conf, documentation
> Reporter: Siyao Meng
> Assignee: Siyao Meng
> Priority: Major
> Attachments: HADOOP-16656.001.patch, HADOOP-16656.002.patch, HADOOP-16656.003.patch
>
> So far the callqueue / scheduler / FairCallQueue-related configurations are only documented in FairCallQueue.md in 3.3.0:
> https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-project-dist/hadoop-common/FairCallQueue.html#Full_List_of_Configurations
> (Thanks Akira for uploading this.)
> Goal: document these configs in core-default.xml as well, to make them easier for users (admins) to find and use.
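For context, the settings the issue wants documented are per-port IPC properties. A minimal sketch of what enabling FairCallQueue looks like is below; port 8020 is only an example NameNode RPC port, and the exact key names should be checked against the FairCallQueue.md page linked above:

```xml
<!-- Example only: enable FairCallQueue on the RPC server listening on port 8020. -->
<property>
  <name>ipc.8020.callqueue.impl</name>
  <value>org.apache.hadoop.ipc.FairCallQueue</value>
</property>
<property>
  <name>ipc.8020.scheduler.impl</name>
  <value>org.apache.hadoop.ipc.DecayRpcScheduler</value>
</property>
<property>
  <name>ipc.8020.scheduler.priority.levels</name>
  <value>4</value>
</property>
```

Because the keys embed a port number, core-default.xml can only document the pattern and defaults, not concrete entries, which is presumably why prose descriptions there are what the patch adds.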