[GitHub] [hadoop] xiaoxiaopan118 opened a new pull request #1658: Merge pull request #1 from apache/trunk
xiaoxiaopan118 opened a new pull request #1658: Merge pull request #1 from apache/trunk URL: https://github.com/apache/hadoop/pull/1658

## NOTICE

Please create an issue in ASF JIRA before opening a pull request, and set the pull request title to start with the corresponding JIRA issue number (e.g. HADOOP-X. Fix a typo in YYY.) For more details, please see https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1650: HDDS-2034. Async RATIS pipeline creation and destroy through datanode…
hadoop-yetus commented on issue #1650: HDDS-2034. Async RATIS pipeline creation and destroy through datanode… URL: https://github.com/apache/hadoop/pull/1650#issuecomment-542052548

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 41 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 2 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 16 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 67 | Maven dependency ordering for branch |
| -1 | mvninstall | 37 | hadoop-hdds in trunk failed. |
| -1 | mvninstall | 40 | hadoop-ozone in trunk failed. |
| -1 | compile | 20 | hadoop-hdds in trunk failed. |
| -1 | compile | 16 | hadoop-ozone in trunk failed. |
| +1 | checkstyle | 60 | trunk passed |
| +1 | mvnsite | 0 | trunk passed |
| +1 | shadedclient | 861 | branch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
| -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
| 0 | spotbugs | 964 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
| -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 29 | Maven dependency ordering for patch |
| -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
| -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
| -1 | compile | 24 | hadoop-hdds in the patch failed. |
| -1 | compile | 19 | hadoop-ozone in the patch failed. |
| -1 | cc | 24 | hadoop-hdds in the patch failed. |
| -1 | cc | 19 | hadoop-ozone in the patch failed. |
| -1 | javac | 24 | hadoop-hdds in the patch failed. |
| -1 | javac | 19 | hadoop-ozone in the patch failed. |
| +1 | checkstyle | 57 | the patch passed |
| +1 | mvnsite | 0 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 2 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 717 | patch has no errors when building and testing our client artifacts. |
| -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
| -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
| -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
| -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
||| _ Other Tests _ |
| -1 | unit | 28 | hadoop-hdds in the patch failed. |
| -1 | unit | 27 | hadoop-ozone in the patch failed. |
| +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
| | | 2480 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1650 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml cc |
| uname | Linux 9ab173466796 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 336abbd |
| Default Java | 1.8.0_222 |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-compile-hadoop-ozone.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-javadoc-hadoop-hdds.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-javadoc-hadoop-ozone.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-findbugs-hadoop-hdds.txt |
| findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-findbugs-hadoop-ozone.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/patch-mvninstall-hadoop-hdds.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/patch-mvninstall-hadoop-ozone.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/patch-compile-hadoop-hdds.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artif
[GitHub] [hadoop] mukul1987 closed pull request #1593: HDDS-2204. Avoid buffer copying in checksum verification.
mukul1987 closed pull request #1593: HDDS-2204. Avoid buffer copying in checksum verification. URL: https://github.com/apache/hadoop/pull/1593
[GitHub] [hadoop] bharatviswa504 closed pull request #1643: HDDS-2278. Run S3 test suite on OM HA cluster.
bharatviswa504 closed pull request #1643: HDDS-2278. Run S3 test suite on OM HA cluster. URL: https://github.com/apache/hadoop/pull/1643
[GitHub] [hadoop] bharatviswa504 closed pull request #1632: HDDS-2194. Replication of Container fails with Only closed containers…
bharatviswa504 closed pull request #1632: HDDS-2194. Replication of Container fails with Only closed containers… URL: https://github.com/apache/hadoop/pull/1632
[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2
[ https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951436#comment-16951436 ] Wei-Chiu Chuang commented on HADOOP-15169:

It is fine in the current Jetty version and Hadoop 3.3.0, where the default excluded protocols in Jetty and the default enabled protocols in Hadoop don't overlap; the effect is the same whether the order is reversed or not. If one day we update to a future Jetty version that excludes more protocols, some of which are permitted by Hadoop by default, I would like them not to be enabled by default (Jetty's exclude list takes precedence over its include list), unless the user consciously updates the hadoop.ssl.enabled.protocols configuration.

> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> ------------------------------------------------------------------
>
> Key: HADOOP-15169
> URL: https://issues.apache.org/jira/browse/HADOOP-15169
> Project: Hadoop Common
> Issue Type: Bug
> Components: security
> Reporter: Brahma Reddy Battula
> Assignee: Brahma Reddy Battula
> Priority: Major
> Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, HADOOP-15169.003.patch, HADOOP-15169.patch
>
> As of now *hadoop.ssl.enabled.protocols* will not take effect for all the HTTP servers (only the DataNode HTTP server uses this config).

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
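The precedence Wei-Chiu describes (Jetty's exclude list wins over the include list) can be sketched outside Jetty itself. This is an illustrative model only, not Hadoop or Jetty source, and the protocol lists are hypothetical:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: the exclude list is applied after the include list,
// so an excluded protocol is never enabled even if it is also included.
public class ProtocolPrecedence {
    static List<String> enabledProtocols(List<String> included, List<String> excluded) {
        List<String> enabled = new ArrayList<>(included);
        enabled.removeAll(excluded);   // exclusion takes precedence
        return enabled;
    }

    public static void main(String[] args) {
        // Hypothetical lists for illustration only.
        List<String> hadoopEnabled = Arrays.asList("TLSv1.2", "TLSv1.1", "TLSv1");
        List<String> jettyExcluded = Arrays.asList("TLSv1", "TLSv1.1");
        System.out.println(enabledProtocols(hadoopEnabled, jettyExcluded)); // [TLSv1.2]
    }
}
```

Under this model, a new Jetty release that grows its exclude list silently shrinks the effective set, which is exactly why the comment wants users to update hadoop.ssl.enabled.protocols consciously.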
[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2
[ https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951411#comment-16951411 ] Xiaoyu Yao commented on HADOOP-15169:

Thanks [~weichiu] for the v3 patch. It looks good to me. One minor comment: on line 551 we use equals to compare the protocol strings; do we need to handle the case where the order differs but the protocols are the same?

|551|if (!enabledProtocols.equals(SSLFactory.SSL_ENABLED_PROTOCOLS_DEFAULT)) {|
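Xiaoyu's concern is that `String.equals` is order-sensitive, so "TLSv1.1,TLSv1.2" and "TLSv1.2,TLSv1.1" compare as different even though they enable the same protocols. A minimal sketch of an order-insensitive comparison (a hypothetical helper, not part of the patch):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: comparing comma-separated protocol lists as sets
// ignores ordering differences that a plain String.equals would flag.
public class ProtocolCompare {
    static boolean sameProtocols(String a, String b) {
        Set<String> setA = new HashSet<>(Arrays.asList(a.split(",")));
        Set<String> setB = new HashSet<>(Arrays.asList(b.split(",")));
        return setA.equals(setB);
    }

    public static void main(String[] args) {
        System.out.println("TLSv1.1,TLSv1.2".equals("TLSv1.2,TLSv1.1")); // false
        System.out.println(sameProtocols("TLSv1.1,TLSv1.2", "TLSv1.2,TLSv1.1")); // true
    }
}
```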
[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2
[ https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951359#comment-16951359 ] Hadoop QA commented on HADOOP-15169:

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 40s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 19m 23s | trunk passed |
| +1 | compile | 16m 22s | trunk passed |
| +1 | checkstyle | 0m 43s | trunk passed |
| +1 | mvnsite | 1m 14s | trunk passed |
| +1 | shadedclient | 14m 20s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 39s | trunk passed |
| +1 | javadoc | 1m 22s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 45s | the patch passed |
| +1 | compile | 15m 25s | the patch passed |
| +1 | javac | 15m 25s | the patch passed |
| -0 | checkstyle | 0m 40s | hadoop-common-project/hadoop-common: The patch generated 1 new + 48 unchanged - 0 fixed = 49 total (was 48) |
| +1 | mvnsite | 1m 9s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 28s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 47s | the patch passed |
| +1 | javadoc | 1m 18s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 9m 14s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 43s | The patch does not generate ASF License warnings. |
| | | 99m 9s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HADOOP-15169 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982988/HADOOP-15169.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 9d77e76a640e 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 336abbd |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16593/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16593/testReport/ |
| Max. process+thread count | 1360 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16593/console |
| Powered
[GitHub] [hadoop] adoroszlai commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints
adoroszlai commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints URL: https://github.com/apache/hadoop/pull/1622#discussion_r334630866

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/ozoneimpl/ContainerDataScanner.java

@@ -95,14 +97,19 @@ public void runIteration() {
     while (!stopping && itr.hasNext()) {
       Container c = itr.next();
       if (c.shouldScanData()) {
+        ContainerData containerData = c.getContainerData();
+        long containerId = containerData.getContainerID();
         try {
+          logScanStart(containerData);
           if (!c.scanData(throttler, canceler)) {
             metrics.incNumUnHealthyContainers();
-            controller.markContainerUnhealthy(
-                c.getContainerData().getContainerID());
+            controller.markContainerUnhealthy(containerId);

Review comment: I would avoid this for two reasons:
1. The full scan includes a scan of the metadata too, and the failure may be due to a metadata problem, e.g. if the `.container` file is missing or invalid. In that case we cannot update the timestamp in the file.
2. Unhealthy containers are skipped during further iterations, so the timestamp would not make much difference anyway.
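The second reason above can be sketched with a toy model (hypothetical types, not the Ozone scanner code): once a container is marked unhealthy, shouldScanData() returns false, so later scan iterations never reach it and a last-scan timestamp on it would never be consulted again.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative sketch of the skip-unhealthy behavior described in the review.
public class ScannerSketch {
    static final class Container {
        final long id;
        boolean healthy = true;
        Container(long id) { this.id = id; }
        boolean shouldScanData() { return healthy; }  // unhealthy => skipped
    }

    // Counts how many containers a scan iteration would actually visit.
    static long runIteration(List<Container> containers) {
        long scanned = 0;
        for (Container c : containers) {
            if (c.shouldScanData()) {
                scanned++;
            }
        }
        return scanned;
    }

    public static void main(String[] args) {
        Container a = new Container(1), b = new Container(2);
        List<Container> all = Arrays.asList(a, b);
        System.out.println(runIteration(all)); // 2
        b.healthy = false;                      // marked unhealthy
        System.out.println(runIteration(all)); // 1: unhealthy container skipped
    }
}
```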
[GitHub] [hadoop] adoroszlai commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints
adoroszlai commented on a change in pull request #1622: HDDS-1228. Chunk Scanner Checkpoints URL: https://github.com/apache/hadoop/pull/1622#discussion_r334629787

## File path: hadoop-hdds/container-service/src/main/java/org/apache/hadoop/ozone/container/common/impl/ContainerData.java

@@ -89,7 +91,9 @@
   private HddsVolume volume;
   private String checksum;
-  public static final Charset CHARSET_ENCODING = Charset.forName("UTF-8");
+  private Long dataScanTimestamp;

Review comment: Thanks for the comments. I will address these and update the pull request in the new repo.
[jira] [Updated] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2
[ https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HADOOP-15169: Attachment: HADOOP-15169.003.patch
[jira] [Commented] (HADOOP-16638) Use Relative URLs in Hadoop KMS WebApps
[ https://issues.apache.org/jira/browse/HADOOP-16638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951215#comment-16951215 ] David Mollitor commented on HADOOP-16638:

[~ayushtkn] Thanks for commenting. This one I was unable to test, since this project requires setting up Kerberos authentication; the other Hadoop projects I was looking at allow a simple authentication scheme where the user name can be passed in the URL for testing purposes. To generate this patch, I applied the same heuristic I used in the other places. As a side note: even though this page is just a simple directory of all available links, it is protected by the authentication filter. That seems a bit of overkill; authentication should kick in when one of the links on this page is accessed.

> Use Relative URLs in Hadoop KMS WebApps
> ---------------------------------------
>
> Key: HADOOP-16638
> URL: https://issues.apache.org/jira/browse/HADOOP-16638
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: kms
> Affects Versions: 3.2.0
> Reporter: David Mollitor
> Assignee: David Mollitor
> Priority: Major
> Attachments: HADOOP-16638.1.patch, HADOOP-16638.2.patch, HADOOP-16638.3.patch
[GitHub] [hadoop] DadanielZ commented on issue #1621: HADOOP-16640. WASB: Override getCanonicalServiceName() to return URI
DadanielZ commented on issue #1621: HADOOP-16640. WASB: Override getCanonicalServiceName() to return URI URL: https://github.com/apache/hadoop/pull/1621#issuecomment-541830126 @steveloughran does it look good to you?
[jira] [Commented] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException
[ https://issues.apache.org/jira/browse/HADOOP-16580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951182#comment-16951182 ] Hadoop QA commented on HADOOP-16580:

(/) *+1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 40s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 27m 53s | trunk passed |
| +1 | compile | 17m 17s | trunk passed |
| +1 | checkstyle | 0m 47s | trunk passed |
| +1 | mvnsite | 1m 30s | trunk passed |
| +1 | shadedclient | 15m 25s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 42s | trunk passed |
| +1 | javadoc | 1m 19s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 45s | the patch passed |
| +1 | compile | 16m 30s | the patch passed |
| +1 | javac | 16m 30s | the patch passed |
| -0 | checkstyle | 0m 45s | hadoop-common-project/hadoop-common: The patch generated 1 new + 114 unchanged - 2 fixed = 115 total (was 116) |
| +1 | mvnsite | 1m 15s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 12m 40s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 1m 48s | the patch passed |
| +1 | javadoc | 1m 18s | the patch passed |
|| || || || Other Tests ||
| +1 | unit | 8m 52s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 42s | The patch does not generate ASF License warnings. |
| | | 110m 43s | |

|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 |
| JIRA Issue | HADOOP-16580 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982974/HADOOP-16580.003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux faa4f8874233 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 5cc7873 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_222 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HADOOP-Build/16592/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HADOOP-Build/16592/testReport/ |
| Max. process+thread count | 1344 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HADOOP-Build/16592/console |
| Power
[GitHub] [hadoop] bgaborg commented on a change in pull request #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.
bgaborg commented on a change in pull request #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD. URL: https://github.com/apache/hadoop/pull/1601#discussion_r334585565

## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java

@@ -2730,39 +2730,41 @@ S3AFileStatus innerGetFileStatus(final Path f,
    * @throws FileNotFoundException when the path does not exist
    * @throws IOException on other problems.
    */
+  @VisibleForTesting
   @Retries.RetryTranslated
-  private S3AFileStatus s3GetFileStatus(final Path path,
-      String key,
+  S3AFileStatus s3GetFileStatus(final Path path,
+      final String key,
       final Set probes,
       final Set tombstones) throws IOException {
-    if (!key.isEmpty() && probes.contains(StatusProbeEnum.Head)) {
-      try {
-        ObjectMetadata meta = getObjectMetadata(key);
-
-        if (objectRepresentsDirectory(key, meta.getContentLength())) {
-          LOG.debug("Found exact file: fake directory");
-          return new S3AFileStatus(Tristate.TRUE, path, username);
-        } else {
-          LOG.debug("Found exact file: normal file");
+    if (!key.isEmpty()) {
+      if (probes.contains(StatusProbeEnum.Head) && !key.endsWith("/")) {

Review comment: @steveloughran
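The diff under review moves the HEAD probe behind an extra guard: a key ending in "/" can only be a directory marker, so a HEAD request looking for an exact file is wasted on it. A minimal sketch of that guard with simplified, hypothetical names (not the actual S3AFileSystem code):

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative sketch of the probe guard discussed above. A HEAD request is
// only worthwhile for a non-empty key that does not end in "/"; keys ending
// in "/" are directory markers and are covered by other probes.
public class StatusProbeSketch {
    enum StatusProbe { Head, List }

    static boolean shouldIssueHead(String key, Set<StatusProbe> probes) {
        return !key.isEmpty()
            && probes.contains(StatusProbe.Head)
            && !key.endsWith("/");
    }

    public static void main(String[] args) {
        Set<StatusProbe> all = EnumSet.allOf(StatusProbe.class);
        System.out.println(shouldIssueHead("data/file.csv", all)); // true
        System.out.println(shouldIssueHead("data/dir/", all));     // false: skip the HEAD
    }
}
```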
[GitHub] [hadoop] hadoop-yetus commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5
hadoop-yetus commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5 URL: https://github.com/apache/hadoop/pull/1656#issuecomment-541809132

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|::|--:|:|:|
| 0 | reexec | 84 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 23 | Maven dependency ordering for branch |
| +1 | mvninstall | 1209 | trunk passed |
| +1 | compile | 1129 | trunk passed |
| +1 | checkstyle | 182 | trunk passed |
| +1 | mvnsite | 104 | trunk passed |
| +1 | shadedclient | 1151 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 111 | trunk passed |
| 0 | spotbugs | 137 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| 0 | findbugs | 32 | branch/hadoop-project no findbugs output file (findbugsXml.xml) |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 25 | Maven dependency ordering for patch |
| +1 | mvninstall | 61 | the patch passed |
| +1 | compile | 1045 | the patch passed |
| -1 | javac | 1045 | root generated 12 new + 1845 unchanged - 0 fixed = 1857 total (was 1845) |
| -0 | checkstyle | 188 | root: The patch generated 9 new + 381 unchanged - 0 fixed = 390 total (was 381) |
| +1 | mvnsite | 115 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | xml | 1 | The patch has no ill-formed XML file. |
| +1 | shadedclient | 826 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 124 | the patch passed |
| 0 | findbugs | 26 | hadoop-project has no data from findbugs |
||| _ Other Tests _ |
| +1 | unit | 26 | hadoop-project in the patch passed. |
| +1 | unit | 637 | hadoop-common in the patch passed. |
| +1 | asflicense | 48 | The patch does not generate ASF License warnings. |
| | | 7434 | |

| Subsystem | Report/Notes |
|--:|:-|
| Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1656 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml |
| uname | Linux 1cc9f1ca945b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 5cc7873 |
| Default Java | 1.8.0_222 |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/1/artifact/out/diff-compile-javac-root.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/1/artifact/out/diff-checkstyle-root.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/1/testReport/ |
| Max. process+thread count | 1348 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-common-project/hadoop-common U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/1/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated.
[jira] [Work started] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-16652 started by Bilahari T H. - > Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to > branch-2 > -- > > Key: HADOOP-16652 > URL: https://issues.apache.org/jira/browse/HADOOP-16652 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Minor > > Make AAD endpoint configurable on all Auth flows -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16652) Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951170#comment-16951170 ] Bilahari T H commented on HADOOP-16652: --- Driver test results using accounts in Central India: mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify Account without namespace support {code:java} [INFO] Tests run: 37, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 393, Failures: 0, Errors: 0, Skipped: 207 [WARNING] Tests run: 151, Failures: 0, Errors: 0, Skipped: 15{code} Account with namespace support {code:java} [INFO] Tests run: 37, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 393, Failures: 0, Errors: 0, Skipped: 21 [WARNING] Tests run: 151, Failures: 0, Errors: 0, Skipped: 15{code} > Backport HADOOP-16587 - "Make AAD endpoint configurable on all Auth flows" to > branch-2 > -- > > Key: HADOOP-16652 > URL: https://issues.apache.org/jira/browse/HADOOP-16652 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Minor > > Make AAD endpoint configurable on all Auth flows
[GitHub] [hadoop] bilaharith opened a new pull request #1657: HADOOP-16587. Make ABFS AAD endpoints configurable.
bilaharith opened a new pull request #1657: HADOOP-16587. Make ABFS AAD endpoints configurable. URL: https://github.com/apache/hadoop/pull/1657 Contributed by Bilahari T H. This also addresses HADOOP-16498: AzureADAuthenticator cannot authenticate in China. Change-Id: I2441dd48b50b59b912b0242f7f5a4418cf94a87c
[jira] [Assigned] (HADOOP-16654) Delete hadoop-ozone and hadoop-hdds subprojects from apache trunk
[ https://issues.apache.org/jira/browse/HADOOP-16654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sandeep Nemuri reassigned HADOOP-16654: --- Assignee: Sandeep Nemuri > Delete hadoop-ozone and hadoop-hdds subprojects from apache trunk > - > > Key: HADOOP-16654 > URL: https://issues.apache.org/jira/browse/HADOOP-16654 > Project: Hadoop Common > Issue Type: Task >Reporter: Marton Elek >Assignee: Sandeep Nemuri >Priority: Major > > As described in the HDDS-2287 ozone/hdds sources are moving to the > apache/hadoop-ozone git repository. > All the remaining ozone/hdds files can be removed from trunk (including hdds > profile in main pom.xml)
[jira] [Updated] (HADOOP-16635) S3A innerGetFileStatus s"directories only" scan still does a HEAD
[ https://issues.apache.org/jira/browse/HADOOP-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16635: Fix Version/s: 3.3.0 Resolution: Fixed Status: Resolved (was: Patch Available) > S3A innerGetFileStatus s"directories only" scan still does a HEAD > - > > Key: HADOOP-16635 > URL: https://issues.apache.org/jira/browse/HADOOP-16635 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > > The patch in HADOOP-16490 is incomplete: we are still checking for the Head > of each object, even though we only wanted the directory checks. As a result, > createFile is still vulnerable to 404 caching on unguarded S3 repos.
[GitHub] [hadoop] steveloughran commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.
steveloughran commented on issue #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD. URL: https://github.com/apache/hadoop/pull/1601#issuecomment-541796750 thx -merged
[GitHub] [hadoop] steveloughran closed pull request #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.
steveloughran closed pull request #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD. URL: https://github.com/apache/hadoop/pull/1601
[GitHub] [hadoop] virajith merged pull request #1478: HDFS-14856. Fetch file ACLs while mounting external store.
virajith merged pull request #1478: HDFS-14856. Fetch file ACLs while mounting external store. URL: https://github.com/apache/hadoop/pull/1478
[GitHub] [hadoop] virajith commented on issue #1478: HDFS-14856. Fetch file ACLs while mounting external store.
virajith commented on issue #1478: HDFS-14856. Fetch file ACLs while mounting external store. URL: https://github.com/apache/hadoop/pull/1478#issuecomment-541788211 Test failures are unrelated. Completing this PR.
[jira] [Commented] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements
[ https://issues.apache.org/jira/browse/HADOOP-16510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951135#comment-16951135 ] Hadoop QA commented on HADOOP-16510: | (x) *-1 overall* | || Vote || Subsystem || Runtime || Comment || | 0 | reexec | 0m 41s | Docker mode activated. | || || || || Prechecks || | +1 | @author | 0m 0s | The patch does not contain any @author tags. | | +1 | test4tests | 0m 0s | The patch appears to include 27 new or modified test files. | || || || || trunk Compile Tests || | 0 | mvndep | 0m 18s | Maven dependency ordering for branch | | +1 | mvninstall | 20m 38s | trunk passed | | +1 | compile | 16m 47s | trunk passed | | +1 | checkstyle | 1m 2s | trunk passed | | +1 | mvnsite | 1m 50s | trunk passed | | +1 | shadedclient | 15m 29s | branch has no errors when building and testing our client artifacts. | | +1 | findbugs | 2m 19s | trunk passed | | +1 | javadoc | 1m 58s | trunk passed | || || || || Patch Compile Tests || | 0 | mvndep | 0m 11s | Maven dependency ordering for patch | | +1 | mvninstall | 1m 14s | the patch passed | | +1 | compile | 17m 4s | the patch passed | | -1 | javac | 17m 4s | root generated 1 new + 1843 unchanged - 1 fixed = 1844 total (was 1844) | | +1 | checkstyle | 0m 59s | hadoop-common-project: The patch generated 0 new + 794 unchanged - 17 fixed = 794 total (was 811) | | +1 | mvnsite | 1m 43s | the patch passed | | +1 | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 | xml | 0m 1s | The patch has no ill-formed XML file. | | +1 | shadedclient | 13m 1s | patch has no errors when building and testing our client artifacts. | | +1 | findbugs | 2m 46s | the patch passed | | +1 | javadoc | 1m 49s | the patch passed | || || || || Other Tests || | -1 | unit | 8m 57s | hadoop-common in the patch failed. | | +1 | unit | 0m 43s | hadoop-nfs in the patch passed. | | +1 | asflicense | 0m 43s | The patch does not generate ASF License warnings. | | | | 109m 39s | | || Reason || Tests || | Failed junit tests | hadoop.security.TestFixKerberosTicketOrder | || Subsystem || Report/Notes || | Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | HADOOP-16510 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982966/HADOOP-16510.003.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux fb188fe0f016 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build
[jira] [Moved] (HADOOP-16654) Delete hadoop-ozone and hadoop-hdds subprojects from apache trunk
[ https://issues.apache.org/jira/browse/HADOOP-16654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal moved HDDS-2288 to HADOOP-16654: -- Key: HADOOP-16654 (was: HDDS-2288) Target Version/s: 3.3.0 (was: 0.5.0) Workflow: no-reopen-closed, patch-avail (was: patch-available, re-open possible) Project: Hadoop Common (was: Hadoop Distributed Data Store) > Delete hadoop-ozone and hadoop-hdds subprojects from apache trunk > - > > Key: HADOOP-16654 > URL: https://issues.apache.org/jira/browse/HADOOP-16654 > Project: Hadoop Common > Issue Type: Task >Reporter: Marton Elek >Priority: Major > > As described in the HDDS-2287 ozone/hdds sources are moving to the > apache/hadoop-ozone git repository. > All the remaining ozone/hdds files can be removed from trunk (including hdds > profile in main pom.xml)
[jira] [Commented] (HADOOP-16635) S3A innerGetFileStatus s"directories only" scan still does a HEAD
[ https://issues.apache.org/jira/browse/HADOOP-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951118#comment-16951118 ] Hudson commented on HADOOP-16635: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17532 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17532/]) HADOOP-16635. S3A "directories only" scan still does a HEAD. (stevel: rev 74e5018d871bdf712b3ad0706150a37cb8efee5c) * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestAuthoritativePath.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3AFileOperationCost.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/ITestS3GuardTtl.java * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/impl/StatusProbeEnum.java > S3A innerGetFileStatus s"directories only" scan still does a HEAD > - > > Key: HADOOP-16635 > URL: https://issues.apache.org/jira/browse/HADOOP-16635 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > > The patch in HADOOP-16490 is incomplete: we are still checking for the Head > of each object, even though we only wanted the directory checks. As a result, > createFile is still vulnerable to 404 caching on unguarded S3 repos.
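The idea behind the commit above is that a "directories only" status check should issue only list-style probes and never a HEAD on the object path, since an unguarded bucket caches the resulting 404. The sketch below is a loose illustration of that probe-selection logic with hypothetical names; it is not the actual S3AFileSystem or StatusProbeEnum code.

```java
import java.util.EnumSet;
import java.util.Set;

// Hypothetical model of probe selection: with a directories-only probe set,
// no HEAD (object) request is issued, so S3 cannot cache a 404 for the path.
public class ProbeSketch {
    enum StatusProbe { HEAD, LIST }

    static final Set<StatusProbe> ALL = EnumSet.of(StatusProbe.HEAD, StatusProbe.LIST);
    static final Set<StatusProbe> DIRECTORIES_ONLY = EnumSet.of(StatusProbe.LIST);

    // Returns the S3 requests a status check with the given probes would issue.
    static String requestsFor(Set<StatusProbe> probes) {
        StringBuilder sb = new StringBuilder();
        if (probes.contains(StatusProbe.HEAD)) sb.append("HEAD "); // object probe
        if (probes.contains(StatusProbe.LIST)) sb.append("LIST");  // directory probe
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(requestsFor(ALL));               // HEAD LIST
        System.out.println(requestsFor(DIRECTORIES_ONLY));  // LIST
    }
}
```

The bug described in the JIRA was, in these terms, passing the full probe set where only the directory probes were wanted.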
[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos
[ https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951109#comment-16951109 ] Hudson commented on HADOOP-15870: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17531 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/17531/]) Revert "HADOOP-15870. S3AInputStream.remainingInFile should use (stevel: rev dee9e97075e67f53d033df522372064ca19d6b51) * (edit) hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java * (edit) hadoop-common-project/hadoop-common/src/site/markdown/filesystem/fsdatainputstream.md * (edit) hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/contract/s3a/ITestS3AContractSeek.java * (edit) hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/AbstractContractSeekTest.java > S3AInputStream.remainingInFile should use nextReadPos > - > > Key: HADOOP-15870 > URL: https://issues.apache.org/jira/browse/HADOOP-15870 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.4, 3.1.1 >Reporter: Shixiong Zhu >Assignee: lqjacklee >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, > HADOOP-15870-004.patch, HADOOP-15870-005.patch > > > Otherwise `remainingInFile` will not change after `seek`.
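The JIRA description ("`remainingInFile` will not change after `seek`") comes from S3A's lazy-seek design: `seek()` only records a target position (`nextReadPos`), and the underlying stream position is updated when a read actually happens. Computing the remaining byte count from the stale stream position therefore ignores the seek. A toy model of the bug and the fix, with hypothetical names rather than the real S3AInputStream fields:

```java
// Toy model of the remainingInFile bug: deriving the remaining byte count from
// the last materialized read position ignores a lazy seek(); deriving it from
// the recorded seek target (nextReadPos) gives the expected answer.
public class RemainingSketch {
    private final long contentLength;
    private long pos;          // underlying stream position, updated on read
    private long nextReadPos;  // target recorded by seek(); reads start here

    RemainingSketch(long contentLength) { this.contentLength = contentLength; }

    void seek(long target) { nextReadPos = target; }  // lazy: 'pos' untouched

    long remainingBuggy() { return contentLength - pos; }          // ignores seek()
    long remainingFixed() { return contentLength - nextReadPos; }  // tracks seek()

    public static void main(String[] args) {
        RemainingSketch in = new RemainingSketch(100);
        in.seek(40);
        System.out.println(in.remainingBuggy()); // 100: unchanged after seek
        System.out.println(in.remainingFixed()); // 60: reflects the seek target
    }
}
```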
[GitHub] [hadoop] hadoop-yetus commented on issue #1655: HADOOP-16629: support copyFile in s3afilesystem
hadoop-yetus commented on issue #1655: HADOOP-16629: support copyFile in s3afilesystem URL: https://github.com/apache/hadoop/pull/1655#issuecomment-541766353 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 53 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 1 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 5 new or modified test files. | ||| _ trunk Compile Tests _ | | 0 | mvndep | 69 | Maven dependency ordering for branch | | +1 | mvninstall | 1125 | trunk passed | | +1 | compile | 1087 | trunk passed | | +1 | checkstyle | 162 | trunk passed | | +1 | mvnsite | 145 | trunk passed | | +1 | shadedclient | 1158 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 139 | trunk passed | | 0 | spotbugs | 71 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 202 | trunk passed | ||| _ Patch Compile Tests _ | | 0 | mvndep | 27 | Maven dependency ordering for patch | | +1 | mvninstall | 90 | the patch passed | | +1 | compile | 1058 | the patch passed | | +1 | javac | 1058 | the patch passed | | -0 | checkstyle | 174 | root: The patch generated 13 new + 106 unchanged - 0 fixed = 119 total (was 106) | | +1 | mvnsite | 147 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | xml | 1 | The patch has no ill-formed XML file. | | +1 | shadedclient | 771 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 136 | the patch passed | | -1 | findbugs | 79 | hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) | ||| _ Other Tests _ | | -1 | unit | 538 | hadoop-common in the patch failed. | | +1 | unit | 96 | hadoop-aws in the patch passed. | | +1 | asflicense | 59 | The patch does not generate ASF License warnings. 
| | | | 7472 | | | Reason | Tests | |---:|:--| | FindBugs | module:hadoop-tools/hadoop-aws | | | Exceptional return value of java.util.concurrent.ThreadPoolExecutor.submit(Callable) ignored in org.apache.hadoop.fs.s3a.S3AFileSystem.copyFile(URI, URI) At S3AFileSystem.java:ignored in org.apache.hadoop.fs.s3a.S3AFileSystem.copyFile(URI, URI) At S3AFileSystem.java:[line 2873] | | Failed junit tests | hadoop.fs.TestFilterFileSystem | | | hadoop.fs.TestHarFileSystem | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1655/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1655 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux a0a88f07421f 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5f4641a | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1655/1/artifact/out/diff-checkstyle-root.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1655/1/artifact/out/new-findbugs-hadoop-tools_hadoop-aws.html | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1655/1/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1655/1/testReport/ | | Max. process+thread count | 1533 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1655/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. 
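The FindBugs -1 in the report above flags `copyFile` for discarding the `Future` returned by `ThreadPoolExecutor.submit(Callable)`. The point of the warning: `submit` captures any exception the task throws inside the `Future`, and `Future.get()` is the only place it resurfaces, so dropping the return value silently swallows failures. A small illustration (the task bodies here are stand-ins, not the actual copy code):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrates the FindBugs complaint: dropping the Future from submit() loses
// any exception the task throws; keeping it and calling get() surfaces failures.
public class FutureSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();

        // Bad: return value ignored; the IllegalStateException vanishes silently.
        pool.submit(() -> { throw new IllegalStateException("copy failed"); });

        // Good: keep the Future; get() rethrows the failure as ExecutionException.
        Future<?> f = pool.submit(() -> { throw new IllegalStateException("copy failed"); });
        try {
            f.get();
        } catch (ExecutionException e) {
            System.out.println("surfaced: " + e.getCause().getMessage());
        }
        pool.shutdown();
    }
}
```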
[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos
[ https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951099#comment-16951099 ] Steve Loughran commented on HADOOP-15870: - Okay I reverted the patch. We can decide what to do at our leisure. I'm thinking we may need both of * Fix WebHDFSInputStream.available() * Allow FS contracts to skip those probes (for downstream uses) > S3AInputStream.remainingInFile should use nextReadPos > - > > Key: HADOOP-15870 > URL: https://issues.apache.org/jira/browse/HADOOP-15870 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.4, 3.1.1 >Reporter: Shixiong Zhu >Assignee: lqjacklee >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, > HADOOP-15870-004.patch, HADOOP-15870-005.patch > > > Otherwise `remainingInFile` will not change after `seek`.
[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos
[ https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951089#comment-16951089 ] Steve Loughran commented on HADOOP-15870: - FWIW this is showing that WebHDFSInputStream.available() is always 0. To be purist, it should be forwarding the probe all the way to the input stream. So after a revert we could actually fix that > S3AInputStream.remainingInFile should use nextReadPos > - > > Key: HADOOP-15870 > URL: https://issues.apache.org/jira/browse/HADOOP-15870 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.4, 3.1.1 >Reporter: Shixiong Zhu >Assignee: lqjacklee >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, > HADOOP-15870-004.patch, HADOOP-15870-005.patch > > > Otherwise `remainingInFile` will not change after `seek`. -- This message was sent by Atlassian Jira (v8.3.4#803005)
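The "forward the probe" fix sketched in the comment above is just delegation: instead of a wrapper stream answering `available()` with a constant 0, it asks the stream it wraps. A minimal sketch of that shape (this is a generic `FilterInputStream`, not the actual WebHDFS code):

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of the proposed fix: a wrapper whose available() delegates to the
// wrapped stream rather than returning a hardcoded 0.
public class ForwardingSketch extends FilterInputStream {
    ForwardingSketch(InputStream in) { super(in); }

    @Override
    public int available() throws IOException {
        return in.available();  // forward the probe all the way down
    }

    public static void main(String[] args) throws IOException {
        try (ForwardingSketch s = new ForwardingSketch(new ByteArrayInputStream(new byte[16]))) {
            System.out.println(s.available()); // 16, not a constant 0
        }
    }
}
```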
[jira] [Reopened] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos
[ https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reopened HADOOP-15870: - Reopening. Two options: revert or fix. I'm generally a fix-forward person for just test failures. I have no time to spare this week as I'm travelling. Reverting may be best for now. At least we know what extra tests to run! I did try to run the ones I knew about including HDFS and Azure, but must have missed this > S3AInputStream.remainingInFile should use nextReadPos > - > > Key: HADOOP-15870 > URL: https://issues.apache.org/jira/browse/HADOOP-15870 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 2.8.4, 3.1.1 >Reporter: Shixiong Zhu >Assignee: lqjacklee >Priority: Major > Fix For: 3.3.0 > > Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, > HADOOP-15870-004.patch, HADOOP-15870-005.patch > > > Otherwise `remainingInFile` will not change after `seek`.
[jira] [Commented] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException
[ https://issues.apache.org/jira/browse/HADOOP-16580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951079#comment-16951079 ] Adam Antal commented on HADOOP-16580: - Thanks for the review [~snemeth]. I have uploaded patchset v3 which added javadocs to the classes that are affected by this patch. > Disable retry of FailoverOnNetworkExceptionRetry in case of > AccessControlException > -- > > Key: HADOOP-16580 > URL: https://issues.apache.org/jira/browse/HADOOP-16580 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: HADOOP-16580.001.patch, HADOOP-16580.002.patch, > HADOOP-16580.003.patch > > > HADOOP-14982 handled the case where a SaslException is thrown. The issue > still persists, since the exception that is thrown is an > *AccessControlException* because user has no kerberos credentials. > My suggestion is that we should add this case as well to > {{FailoverOnNetworkExceptionRetry}}.
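The proposal in the issue is that an authentication failure, unlike a network fault, cannot be cured by retrying or failing over, so the policy should fail fast on it, as HADOOP-14982 already did for SaslException. A simplified model of that decision (standalone classes standing in for Hadoop's RetryPolicy and exception types, not the actual patch):

```java
// Simplified model of the proposed policy change: auth failures (the nested
// AccessControlException stands in for Hadoop's class) fail immediately
// instead of being retried or failed over like transient network errors.
public class RetrySketch {
    enum Action { FAILOVER_AND_RETRY, FAIL }

    static class AccessControlException extends Exception {}
    static class NetworkException extends Exception {}

    static Action shouldRetry(Exception e, int retries, int maxRetries) {
        if (e instanceof AccessControlException) {
            return Action.FAIL;  // no credentials: another attempt cannot succeed
        }
        if (retries < maxRetries) {
            return Action.FAILOVER_AND_RETRY;  // transient fault: try the other RM
        }
        return Action.FAIL;  // retry budget exhausted
    }

    public static void main(String[] args) {
        System.out.println(shouldRetry(new AccessControlException(), 0, 15)); // FAIL
        System.out.println(shouldRetry(new NetworkException(), 0, 15));       // FAILOVER_AND_RETRY
    }
}
```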
[jira] [Updated] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException
[ https://issues.apache.org/jira/browse/HADOOP-16580?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-16580: Attachment: HADOOP-16580.003.patch > Disable retry of FailoverOnNetworkExceptionRetry in case of > AccessControlException > -- > > Key: HADOOP-16580 > URL: https://issues.apache.org/jira/browse/HADOOP-16580 > Project: Hadoop Common > Issue Type: Bug > Components: common >Affects Versions: 3.3.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: HADOOP-16580.001.patch, HADOOP-16580.002.patch, > HADOOP-16580.003.patch > > > HADOOP-14982 handled the case where a SaslException is thrown. The issue > still persists, since the exception that is thrown is an > *AccessControlException* because user has no kerberos credentials. > My suggestion is that we should add this case as well to > {{FailoverOnNetworkExceptionRetry}}.
[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.
[ https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951076#comment-16951076 ] Steve Loughran commented on HADOOP-13223: - You need to upgrade to a version of snappy that deals with new platforms, e.g. arm-64. Pure NIO would be best. It would also be much better in testing, where it is near impossible to get that native library on the CP. > winutils.exe is a bug nexus and should be killed with an axe. > - > > Key: HADOOP-13223 > URL: https://issues.apache.org/jira/browse/HADOOP-13223 > Project: Hadoop Common > Issue Type: Sub-task > Components: bin >Affects Versions: 2.6.0 > Environment: Microsoft Windows, all versions >Reporter: john lilley >Priority: Major > > winutils.exe was apparently created as a stopgap measure to allow Hadoop to > "work" on Windows platforms, because the NativeIO libraries aren't > implemented there (edit: even NativeIO probably doesn't cover the operations > that winutils.exe is used for). Rather than building a DLL that makes native > OS calls, the creators of winutils.exe must have decided that it would be > more expedient to create an EXE to carry out file system operations in a > linux-like fashion. Unfortunately, like many stopgap measures in software, > this one has persisted well beyond its expected lifetime and usefulness. My > team creates software that runs on Windows and Linux, and winutils.exe is > probably responsible for 20% of all issues we encounter, both during > development and in the field. > Problem #1 with winutils.exe is that it is simply missing from many popular > distros and/or the client-side software installation for said distros, when > supplied, fails to install winutils.exe. Thus, as software developers, we > are forced to pick one version and distribute and install it with our > software. > Which leads to problem #2: winutils.exe are not always compatible.
In > particular, MapR MUST have its winutils.exe in the system path, but doing so > breaks the Hadoop distro for every other Hadoop vendor. This makes creating > and maintaining test environments that work with all of the Hadoop distros we > want to test unnecessarily tedious and error-prone. > Problem #3 is that the mechanism by which you inform the Hadoop client > software where to find winutils.exe is poorly documented and fragile. First, > it can be in the PATH. If it is in the PATH, that is where it is found. > However, the documentation, such as it is, makes no mention of this, and > instead says that you should set the HADOOP_HOME environment variable, which > does NOT override the winutils.exe found in your system PATH. > Which leads to problem #4: There is no logging that says where winutils.exe > was actually found and loaded. Because of this, fixing problems of finding > the wrong winutils.exe are extremely difficult. > Problem #5 is that most of the time, such as when accessing straight up HDFS > and YARN, one does not *need* winutils.exe. But if it is missing, the log > messages complain about its absence. When we are trying to diagnose an > obscure issue in Hadoop (of which there are many), the presence of this red > herring leads to all sorts of time wasted until someone on the team points > out that winutils.exe is not the problem, at least not this time. > Problem #6 is that errors and stack traces from issues involving winutils.exe > are not helpful. The Java stack trace ends at the ProcessBuilder call. Only > through bitter experience is one able to connect the dots from > "ProcessBuilder is the last thing on the stack" to "something is wrong with > winutils.exe". > Note that none of these involve running Hadoop on Windows. They are only > encountered when using Hadoop client libraries to access a cluster from > Windows. 
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #936: YARN-9605. Add ZkConfiguredFailoverProxyProvider for RM HA
hadoop-yetus commented on issue #936: YARN-9605. Add ZkConfiguredFailoverProxyProvider for RM HA URL: https://github.com/apache/hadoop/pull/936#issuecomment-541752190

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|--------:|:--------|:--------|
| 0 | reexec | 86 | Docker mode activated. |
||| _ Prechecks _ |
| +1 | dupname | 0 | No case conflicting files found. |
| +1 | @author | 0 | The patch does not contain any @author tags. |
| +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
||| _ trunk Compile Tests _ |
| 0 | mvndep | 71 | Maven dependency ordering for branch |
| +1 | mvninstall | 1436 | trunk passed |
| +1 | compile | 1353 | trunk passed |
| +1 | checkstyle | 195 | trunk passed |
| +1 | mvnsite | 292 | trunk passed |
| +1 | shadedclient | 1442 | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 238 | trunk passed |
| 0 | spotbugs | 135 | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 547 | trunk passed |
||| _ Patch Compile Tests _ |
| 0 | mvndep | 26 | Maven dependency ordering for patch |
| +1 | mvninstall | 208 | the patch passed |
| +1 | compile | 1286 | the patch passed |
| -1 | cc | 1286 | root generated 5 new + 21 unchanged - 5 fixed = 26 total (was 26) |
| +1 | javac | 1286 | the patch passed |
| -0 | checkstyle | 203 | root: The patch generated 22 new + 21 unchanged - 0 fixed = 43 total (was 21) |
| +1 | mvnsite | 289 | the patch passed |
| +1 | whitespace | 0 | The patch has no whitespace issues. |
| +1 | shadedclient | 844 | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 221 | the patch passed |
| +1 | findbugs | 521 | the patch passed |
||| _ Other Tests _ |
| -1 | unit | 541 | hadoop-common in the patch failed. |
| +1 | unit | 58 | hadoop-yarn-api in the patch passed. |
| +1 | unit | 247 | hadoop-yarn-common in the patch passed. |
| +1 | unit | 5306 | hadoop-yarn-server-resourcemanager in the patch passed. |
| +1 | asflicense | 46 | The patch does not generate ASF License warnings. |
| | | 15271 | |

| Reason | Tests |
|-------:|:------|
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-936/12/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/936 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 86a3e4441a46 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 5f4641a |
| Default Java | 1.8.0_222 |
| cc | https://builds.apache.org/job/hadoop-multibranch/job/PR-936/12/artifact/out/diff-compile-cc-root.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-936/12/artifact/out/diff-checkstyle-root.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-936/12/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-936/12/testReport/ |
| Max. process+thread count | 1584 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-936/12/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. 
To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16635) S3A innerGetFileStatus s"directories only" scan still does a HEAD
[ https://issues.apache.org/jira/browse/HADOOP-16635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16635: Summary: S3A innerGetFileStatus s"directories only" scan still does a HEAD (was: S3A innerGetFileStatus scans for directories-only still does a HEAD) > S3A innerGetFileStatus s"directories only" scan still does a HEAD > - > > Key: HADOOP-16635 > URL: https://issues.apache.org/jira/browse/HADOOP-16635 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Blocker > > The patch in HADOOP-16490 is incomplete: we are still checking for the Head > of each object, even though we only wanted the directory checks. As a result, > createFile is still vulnerable to 404 caching on unguarded S3 repos.
[GitHub] [hadoop] rbalamohan edited a comment on issue #1655: HADOOP-16629: support copyFile in s3afilesystem
rbalamohan edited a comment on issue #1655: HADOOP-16629: support copyFile in s3afilesystem URL: https://github.com/apache/hadoop/pull/1655#issuecomment-541741337 Test results with 8 parallel threads. (region=us-west-2) ``` Tests run: 1101, Failures: 5, Errors: 24, Skipped: 318 ``` Errors are not related to this patch.
[GitHub] [hadoop] rbalamohan commented on issue #1655: HADOOP-16629: support copyFile in s3afilesystem
rbalamohan commented on issue #1655: HADOOP-16629: support copyFile in s3afilesystem URL: https://github.com/apache/hadoop/pull/1655#issuecomment-541741337 With 8 parallel threads. Errors are not related to this patch. ``` Tests run: 1101, Failures: 5, Errors: 24, Skipped: 318 ```
[jira] [Updated] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements
[ https://issues.apache.org/jira/browse/HADOOP-16510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-16510: Attachment: HADOOP-16510.003.patch > [hadoop-common] Fix order of actual and expected expression in assert > statements > > > Key: HADOOP-16510 > URL: https://issues.apache.org/jira/browse/HADOOP-16510 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.2.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: HADOOP-16510.001.patch, HADOOP-16510.002.patch, > HADOOP-16510.003.patch > > > Fix the order of actual and expected expressions in assert statements, which gives > a misleading message when a test case fails. The attached file has some of the places > where the order is wrong. > {code:java} > [ERROR] > testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService) > Time elapsed: 3.385 s <<< FAILURE! > java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but > was:<0> > {code} > For the long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be > used for new test cases, which avoids such mistakes. > This is a follow-up jira for the hadoop-common project.
[jira] [Commented] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements
[ https://issues.apache.org/jira/browse/HADOOP-16510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951044#comment-16951044 ] Adam Antal commented on HADOOP-16510: - The javac error is irrelevant, as I didn't add that call; I only rewrote the assertion surrounding it. Fixed the last checkstyle issue in [^HADOOP-16510.003.patch].
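The argument-order mistake this issue fixes is easy to see in a minimal sketch. JUnit is not assumed on the classpath here, so a stand-in with the same `(message, expected, actual)` contract and the same failure-message format is used; the class and values are illustrative:

```java
// Minimal sketch of the argument-order bug: JUnit's assertEquals contract is
// (message, expected, actual); swapping the last two still compiles, but the
// failure message reports the values backwards.
public class AssertOrderExample {

    // Stand-in with the same contract and message format as JUnit's assertEquals.
    static String failureMessage(String message, Object expected, Object actual) {
        return message + " expected:<" + expected + "> but was:<" + actual + ">";
    }

    static void assertEquals(String message, Object expected, Object actual) {
        if (!expected.equals(actual)) {
            throw new AssertionError(failureMessage(message, expected, actual));
        }
    }

    public static void main(String[] args) {
        int shutdownNodes = 0; // the value the test actually observed
        try {
            // Swapped: the observed value sits in the "expected" slot.
            assertEquals("Shutdown nodes should be 1 now", shutdownNodes, 1);
        } catch (AssertionError e) {
            // Reports expected:<0> but was:<1> -- the reverse of reality.
            System.out.println(e.getMessage());
        }
        // Correct order: expected constant first, observed value second.
        assertEquals("Shutdown nodes should be 0 now", 0, shutdownNodes);
        System.out.println("OK");
    }
}
```

AssertJ, which the issue suggests for new tests, avoids the trap because the observed value always comes first: `assertThat(shutdownNodes).isEqualTo(0)`.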
[jira] [Commented] (HADOOP-16653) S3Guard DDB overreacts to no tag access
[ https://issues.apache.org/jira/browse/HADOOP-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951043#comment-16951043 ] Gabor Bota commented on HADOOP-16653: - It was clearly in the docs, so I'll update that as well: {{s3guard.md}}: {noformat} *Note*: If the user does not have sufficient rights to tag the table, but it can read the tags, the initialization of S3Guard will not fail, but there will be no version marker tag on the dynamo table and the following message will be logged on WARN level: ``` Exception during tagging table: {AmazonDynamoDBException exception message} ``` {noformat} > S3Guard DDB overreacts to no tag access > --- > > Key: HADOOP-16653 > URL: https://issues.apache.org/jira/browse/HADOOP-16653 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Minor > > if you don't have permissions to read or write DDB tags it logs a lot every > time you bring up a guarded FS > # we shouldn't worry so much about no tag access if version is there > # if you can't read the tag, no point trying to write
[jira] [Comment Edited] (HADOOP-16653) S3Guard DDB overreacts to no tag access
[ https://issues.apache.org/jira/browse/HADOOP-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951043#comment-16951043 ] Gabor Bota edited comment on HADOOP-16653 at 10/14/19 2:47 PM: --- It was clearly in the docs, so I'll update that as well: {{s3guard.md}}: {noformat} *Note*: If the user does not have sufficient rights to tag the table, the initialization of S3Guard will not fail, but there will be no version marker tag on the dynamo table and the following message will be logged on WARN level: ``` Exception during tagging table: {AmazonDynamoDBException exception message} ``` {noformat}
[jira] [Commented] (HADOOP-13223) winutils.exe is a bug nexus and should be killed with an axe.
[ https://issues.apache.org/jira/browse/HADOOP-13223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951041#comment-16951041 ] john lilley commented on HADOOP-13223: -- I've recently started working with the snappy compressor, and many Hadoop libraries rely on it as well. It is impressive that a user of the library doesn't need to know anything about the native code – the dll/so is cached to temp disk upon first use and loaded. If native calls are still necessary to achieve linux FS emulation (instead of the NIO ACL interface) it would be better to emulate this approach as it would eliminate questions of version and compatibility.
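The snappy-java style "self-extracting" loader praised above can be sketched in a few lines: the platform-specific library ships inside the jar, is copied to a temp file on first use, and is loaded from there, so the user never manages native files. The resource layout and names below are hypothetical, not Hadoop's actual packaging:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

// Sketch of a bundled-native-library loader: extract the embedded library to a
// temp file the JVM can dlopen, then System.load it. Names are illustrative.
public final class BundledNativeLoader {
    private static boolean loaded = false;

    // e.g. "/native/Windows/amd64/winfs.dll" -- hypothetical resource path.
    static String resourcePathFor(String os, String arch, String libName) {
        return "/native/" + os + "/" + arch + "/" + libName;
    }

    static synchronized void load(String resourcePath) throws IOException {
        if (loaded) {
            return; // only extract and load once per JVM
        }
        try (InputStream in = BundledNativeLoader.class.getResourceAsStream(resourcePath)) {
            if (in == null) {
                throw new IOException("No bundled library at " + resourcePath);
            }
            // Copy the embedded library to disk; the temp file is cleaned up on exit.
            Path tmp = Files.createTempFile("native-lib", ".tmp");
            tmp.toFile().deleteOnExit();
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            System.load(tmp.toAbsolutePath().toString());
            loaded = true;
        }
    }
}
```

The version question disappears because the extracted library always matches the jar it came from.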
[jira] [Comment Edited] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951029#comment-16951029 ] Mate Szalay-Beko edited comment on HADOOP-16579 at 10/14/19 2:22 PM: - I was able to find the root cause of the {{TestZKFailoverController}} failures, I created a new PR ([PR-1656|https://github.com/apache/hadoop/pull/1656]). Let's see if any other tests will fail. was (Author: symat): I was able to find the root cause of the `TestZKFailoverController` failures, I created a new PR. Let's see if any other tests will fail. > Upgrade to Apache Curator 4.2.0 in Hadoop > - > > Key: HADOOP-16579 > URL: https://issues.apache.org/jira/browse/HADOOP-16579 > Project: Hadoop Common > Issue Type: Improvement >Reporter: Mate Szalay-Beko >Assignee: Norbert Kalmár >Priority: Major > > Currently in Hadoop we are using [ZooKeeper version > 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90]. > ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains > many new features (including SSL related improvements which can be very > important for production use; see [the release > notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]). > Apache Curator is a high level ZooKeeper client library, that makes it easier > to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator > 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91] > and [in Ozone we use Curator > 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146]. > Curator 2.x is supporting only the ZooKeeper 3.4.x releases, while Curator > 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, > the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and > 3.5.x. 
(see [the relevant Curator > page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects > have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other > components are doing it right now (e.g. Hive). > *The aims of this task are* to: > - change Curator version in Hadoop to the latest stable 4.x version > (currently 4.2.0) > - also make sure we don't have multiple ZooKeeper versions in the classpath > to avoid runtime problems (it is > [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the > ZooKeeper which come with Curator, so that there will be only a single > ZooKeeper version used runtime in Hadoop) > In this ticket we still don't want to change the default ZooKeeper version in > Hadoop, we only want to make it possible for the community to be able to > build / use Hadoop with the new ZooKeeper (e.g. if they need to secure the > ZooKeeper communication with SSL, what is only supported in the new ZooKeeper > version). Upgrading to Curator 4.x should keep Hadoop to be compatible with > both ZooKeeper 3.4 and 3.5. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16951029#comment-16951029 ] Mate Szalay-Beko commented on HADOOP-16579: --- I was able to find the root cause of the `TestZKFailoverController` failures, I created a new PR. Let's see if any other tests will fail.
[GitHub] [hadoop] symat opened a new pull request #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5
symat opened a new pull request #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5 URL: https://github.com/apache/hadoop/pull/1656 In this PR we upgraded Apache Curator to 4.2.0 and Apache ZooKeeper to 3.5.5. While the new ZooKeeper is backward-compatible with the old one, we still encountered a few minor issues. So far the following changes have been made: - I added a static initializer for the unit tests using ZooKeeper to enable the four-letter-words diagnostic telnet commands. This is an interface that became disabled by default due to security concerns (see https://issues.apache.org/jira/browse/ZOOKEEPER-2693). To keep the ZooKeeper 3.4.x behaviour, we enabled it for the tests. Some tests in Hadoop (or other projects using Hadoop-Common) might actually use this feature, e.g. to verify the status of ZooKeeper. - I also fixed `ZKFailoverController` to look for relevant fail-over ActiveAttempt records. The new ZooKeeper seems to respond quicker during the fail-over tests than the old ZooKeeper, so we made sure to catch all the relevant records by adding a new parameter to `ZKFailoverController.waitForActiveAttempt()`.
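The static initializer described in the first bullet can be sketched as follows. ZooKeeper 3.5 whitelists the four-letter-word commands (`ruok`, `stat`, ...) via the `4lw.commands.whitelist` option introduced in ZOOKEEPER-2693, which tests commonly set through the corresponding system property; the test base-class name here is illustrative, not Hadoop's actual class:

```java
// Sketch: re-enable ZooKeeper's four-letter-word telnet commands for tests.
// The property must be set before the embedded ZooKeeper server classes
// initialize, hence the static block in a common test base class.
public abstract class ZooKeeperTestBase {
    static {
        // "*" restores the permissive 3.4.x behaviour for this test JVM only;
        // production servers should whitelist individual commands instead.
        System.setProperty("zookeeper.4lw.commands.whitelist", "*");
    }

    protected static String fourLetterWhitelist() {
        return System.getProperty("zookeeper.4lw.commands.whitelist");
    }
}
```

Any test subclassing this base can then probe the embedded server over telnet (e.g. send `ruok` and expect `imok`) exactly as it did against ZooKeeper 3.4.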
[GitHub] [hadoop] elek commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode
elek commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode URL: https://github.com/apache/hadoop/pull/1431#issuecomment-541700703 > CI build failed on new PR: apache/hadoop-ozone#13. Could you please take a look? Sorry, this is my fault. One commit is missing from all of the created PR branches (restore the README.txt). 1. You can rebase to the latest master (the safest choice) 2. OR locally you can create an empty README.txt as a workaround 3. I also modified the CI script to handle all of these branches, so it should work from now on.
[GitHub] [hadoop] rbalamohan commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem
rbalamohan commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem URL: https://github.com/apache/hadoop/pull/1591#issuecomment-541693736 Sorry about the merge mess up. I have created PR: https://github.com/apache/hadoop/pull/1655 for this.
[GitHub] [hadoop] rbalamohan opened a new pull request #1655: HADOOP-16629: support copyFile in s3afilesystem
rbalamohan opened a new pull request #1655: HADOOP-16629: support copyFile in s3afilesystem URL: https://github.com/apache/hadoop/pull/1655 This is a subtask of HADOOP-16604, which aims to provide copy functionality for cloud native applications. The intent of this PR is to provide copyFile(URI src, URI dst) functionality for S3AFileSystem (HADOOP-16629). Creating a new PR due to a merge mess up in https://github.com/apache/hadoop/pull/1591. Changes w.r.t. PR 1591: 1. Fixed doc (filesystem.md) 2. Fixed AbstractContractCopyTest. 3. If the file already exists in the destination, the dest file is overwritten. 4. Added CompletableFuture support: `public CompletableFuture copyFile(URI srcFile, URI dstFile)` CompletableFuture makes the API nicer. However, `CompletableFuture::get --> waitingAndGet` invokes `Runtime.getAvailableProcessors` frequently. This can turn out to be an expensive native call depending on the workload. We can optimise this later if it turns out to be an issue. If the destination bucket is different, the relevant permissions/policies have to be set up already, without which it would throw exceptions. Providing a URI instead of a Path makes it easier to reference different buckets on demand. Since the implementation is yet to stabilize, we can make relevant changes in the store. Testing was done in region=us-west-2 on my local laptop. Contract tests and huge file tests passed. Other tests are still running and I will post the results. (ITestS3AContractRename failed, but is not related to this patch)
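A caller of the proposed CompletableFuture-returning copyFile could avoid the per-call cost of blocking `get()` by chaining completions and joining only once. S3AFileSystem itself is not assumed on the classpath here, so a stand-in async copy illustrates the calling pattern; the bucket names are hypothetical:

```java
import java.net.URI;
import java.util.concurrent.CompletableFuture;

// Sketch of consuming an async copyFile(URI, URI) API: chain follow-up work
// with thenRun instead of blocking on get() after every copy, and join()
// a single time at the end.
public class AsyncCopyExample {
    // Stand-in for fs.copyFile(src, dst); the real call would copy S3 objects.
    static CompletableFuture<Void> copyFile(URI src, URI dst) {
        return CompletableFuture.runAsync(
            () -> System.out.println("copied " + src + " -> " + dst));
    }

    public static void main(String[] args) {
        URI src = URI.create("s3a://src-bucket/in.txt");   // hypothetical paths
        URI dst = URI.create("s3a://dst-bucket/out.txt");
        copyFile(src, dst)
            .thenRun(() -> System.out.println("copy finished"))
            .join(); // block once, at the end of the pipeline
    }
}
```

Chaining this way sidesteps the `waitingAndGet` overhead mentioned above for all but the final synchronization point.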
[GitHub] [hadoop] bgaborg commented on a change in pull request #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD.
bgaborg commented on a change in pull request #1601: HADOOP-16635. S3A innerGetFileStatus scans for directories-only still does a HEAD. URL: https://github.com/apache/hadoop/pull/1601#discussion_r334491571

File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java

```diff
@@ -2730,39 +2730,41 @@ S3AFileStatus innerGetFileStatus(final Path f,
    * @throws FileNotFoundException when the path does not exist
    * @throws IOException on other problems.
    */
+  @VisibleForTesting
   @Retries.RetryTranslated
-  private S3AFileStatus s3GetFileStatus(final Path path,
-      String key,
+  S3AFileStatus s3GetFileStatus(final Path path,
+      final String key,
       final Set probes,
       final Set tombstones) throws IOException {
-    if (!key.isEmpty() && probes.contains(StatusProbeEnum.Head)) {
-      try {
-        ObjectMetadata meta = getObjectMetadata(key);
-
-        if (objectRepresentsDirectory(key, meta.getContentLength())) {
-          LOG.debug("Found exact file: fake directory");
-          return new S3AFileStatus(Tristate.TRUE, path, username);
-        } else {
-          LOG.debug("Found exact file: normal file");
+    if (!key.isEmpty()) {
+      if (probes.contains(StatusProbeEnum.Head) && !key.endsWith("/")) {
```

Review comment: this is handled in https://issues.apache.org/jira/browse/HADOOP-15430 right?
[GitHub] [hadoop] rbalamohan commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem
rbalamohan commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem URL: https://github.com/apache/hadoop/pull/1591#issuecomment-541674323 Please ignore the last wrong commit.
[jira] [Commented] (HADOOP-16629) support copyFile in s3afilesystem
[ https://issues.apache.org/jira/browse/HADOOP-16629?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950993#comment-16950993 ] Hadoop QA commented on HADOOP-16629: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 12s{color} | {color:red} https://github.com/apache/hadoop/pull/1591 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | GITHUB PR | https://github.com/apache/hadoop/pull/1591 | | JIRA Issue | HADOOP-16629 | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/5/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated. > support copyFile in s3afilesystem > - > > Key: HADOOP-16629 > URL: https://issues.apache.org/jira/browse/HADOOP-16629 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.2.1 >Reporter: Rajesh Balamohan >Priority: Minor > -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] hadoop-yetus commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem
hadoop-yetus commented on issue #1591: HADOOP-16629: support copyFile in s3afilesystem URL: https://github.com/apache/hadoop/pull/1591#issuecomment-541673708 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 0 | Docker mode activated. | | -1 | patch | 12 | https://github.com/apache/hadoop/pull/1591 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. | | Subsystem | Report/Notes | |--:|:-| | GITHUB PR | https://github.com/apache/hadoop/pull/1591 | | JIRA Issue | HADOOP-16629 | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1591/5/console | | versions | git=2.17.1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements
[ https://issues.apache.org/jira/browse/HADOOP-16510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950982#comment-16950982 ] Hadoop QA commented on HADOOP-16510: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 37s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 27 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 2s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 51s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 11s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 43s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 43s{color} | {color:red} root generated 1 new + 1843 unchanged - 1 fixed = 1844 total (was 1844) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 1m 0s{color} | {color:orange} hadoop-common-project: The patch generated 1 new + 794 unchanged - 17 fixed = 795 total (was 811) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 50s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 45s{color} | {color:green} hadoop-nfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 47s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}105m 29s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:104ccca9169 | | JIRA Issue | HADOOP-16510 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12982942/HADOOP-16510.002.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 52555dc1dda7 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provide
[GitHub] [hadoop] hadoop-yetus commented on issue #1654: YARN-9689: Support proxy user for Router to support kerberos
hadoop-yetus commented on issue #1654: YARN-9689: Support proxy user for Router to support kerberos URL: https://github.com/apache/hadoop/pull/1654#issuecomment-541645052 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 99 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | -1 | test4tests | 0 | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1291 | trunk passed | | +1 | compile | 24 | trunk passed | | +1 | checkstyle | 20 | trunk passed | | +1 | mvnsite | 27 | trunk passed | | +1 | shadedclient | 852 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 22 | trunk passed | | 0 | spotbugs | 41 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 39 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 23 | the patch passed | | +1 | compile | 18 | the patch passed | | +1 | javac | 18 | the patch passed | | +1 | checkstyle | 14 | the patch passed | | +1 | mvnsite | 22 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 854 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 21 | the patch passed | | +1 | findbugs | 44 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 97 | hadoop-yarn-server-router in the patch passed. | | +1 | asflicense | 29 | The patch does not generate ASF License warnings. 
| | | | 3606 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1654/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1654 | | JIRA Issue | YARN-9689 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 4e20a1c0f05c 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5f4641a | | Default Java | 1.8.0_222 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1654/1/testReport/ | | Max. process+thread count | 692 (vs. ulimit of 5500) | | modules | C: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router U: hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-router | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1654/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on issue #1653: HADOOP-16653. S3Guard DDB overreacts to no tag access.
hadoop-yetus commented on issue #1653: HADOOP-16653. S3Guard DDB overreacts to no tag access. URL: https://github.com/apache/hadoop/pull/1653#issuecomment-541631030 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 81 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1270 | trunk passed | | +1 | compile | 36 | trunk passed | | +1 | checkstyle | 27 | trunk passed | | +1 | mvnsite | 40 | trunk passed | | +1 | shadedclient | 862 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 25 | trunk passed | | 0 | spotbugs | 90 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 87 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 42 | the patch passed | | +1 | compile | 28 | the patch passed | | +1 | javac | 28 | the patch passed | | +1 | checkstyle | 21 | the patch passed | | +1 | mvnsite | 39 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 896 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 26 | the patch passed | | +1 | findbugs | 72 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 92 | hadoop-aws in the patch passed. | | +1 | asflicense | 30 | The patch does not generate ASF License warnings. 
| | | | 3769 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1653/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1653 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux fc2f28ee986e 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5f4641a | | Default Java | 1.8.0_222 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1653/1/testReport/ | | Max. process+thread count | 336 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1653/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] caneGuy opened a new pull request #1654: YARN-9689: Support proxy user for Router to support kerberos
caneGuy opened a new pull request #1654: YARN-9689: Support proxy user for Router to support kerberos URL: https://github.com/apache/hadoop/pull/1654 When we enable Kerberos in YARN Federation mode, we cannot get a new application, because the Kerberos exception below is thrown. This should be handled! ` 2019-07-22,18:43:25,523 WARN org.apache.hadoop.ipc.Client: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 2019-07-22,18:43:25,528 WARN org.apache.hadoop.yarn.server.router.clientrm.FederationClientInterceptor: Unable to create a new ApplicationId in SubCluster xxx java.io.IOException: DestHost:destPort xxx , LocalHost:localPort xxx. Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831) `
[jira] [Commented] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements
[ https://issues.apache.org/jira/browse/HADOOP-16510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950908#comment-16950908 ] Adam Antal commented on HADOOP-16510: - Uploaded patchset v2 - fixing checkstyle and failed tests. Also, a short notice to everyone: patch v1 introduced a timeout in {{TestLightWeightResizableGSet}} because {{assertThat$contains}} does not delegate to {{Iterable$contains}}; it uses a stream-filter-collect-assertNotEmpty approach, which is *very* inefficient (hence the timeout) - and it would not test the behaviour of the {{LightWeightResizableGSet}} anyway. Fixed all of this in patchset v2. > [hadoop-common] Fix order of actual and expected expression in assert > statements > > > Key: HADOOP-16510 > URL: https://issues.apache.org/jira/browse/HADOOP-16510 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.2.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: HADOOP-16510.001.patch, HADOOP-16510.002.patch > > > Fix order of actual and expected expression in assert statements which gives > misleading message when test case fails. Attached file has some of the places > where it is placed wrongly. > {code:java} > [ERROR] > testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService) > Time elapsed: 3.385 s <<< FAILURE! > java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but > was:<0> > {code} > For long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be > used for new test cases which avoids such mistakes. > This is a follow-up jira for the hadoop-common project.
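The argument-order pitfall described in the issue can be shown with a small self-contained example. The failureMessage helper below is hypothetical: it only mirrors the JUnit-style "expected:<…> but was:<…>" message format, it is not a JUnit API:

```java
/**
 * Demonstrates why assertEquals(message, expected, actual) argument order
 * matters: swapping the arguments produces a misleading failure message.
 */
public class AssertOrderDemo {

  // Builds failure text the way JUnit-style asserts render it.
  static String failureMessage(String msg, Object expected, Object actual) {
    return msg + " expected:<" + expected + "> but was:<" + actual + ">";
  }

  public static void main(String[] args) {
    int actualShutdownNodes = 1;   // value produced by the code under test
    int expectedShutdownNodes = 0; // what the test intends

    // Swapped (actual passed as expected) reproduces the misleading message
    // from the JIRA description: "expected:<1> but was:<0>".
    System.out.println(failureMessage("Shutdown nodes should be 0 now",
        actualShutdownNodes, expectedShutdownNodes));

    // Correct order reports what the test really expected.
    System.out.println(failureMessage("Shutdown nodes should be 0 now",
        expectedShutdownNodes, actualShutdownNodes));
  }
}
```

AssertJ's `assertThat(actual).isEqualTo(expected)` style avoids the ambiguity entirely, since the actual value always comes first.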
[jira] [Updated] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements
[ https://issues.apache.org/jira/browse/HADOOP-16510?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Adam Antal updated HADOOP-16510: Attachment: HADOOP-16510.002.patch > [hadoop-common] Fix order of actual and expected expression in assert > statements > > > Key: HADOOP-16510 > URL: https://issues.apache.org/jira/browse/HADOOP-16510 > Project: Hadoop Common > Issue Type: Sub-task >Affects Versions: 3.2.0 >Reporter: Adam Antal >Assignee: Adam Antal >Priority: Major > Attachments: HADOOP-16510.001.patch, HADOOP-16510.002.patch > > > Fix order of actual and expected expression in assert statements which gives > misleading message when test case fails. Attached file has some of the places > where it is placed wrongly. > {code:java} > [ERROR] > testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService) > Time elapsed: 3.385 s <<< FAILURE! > java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but > was:<0> > {code} > For long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be > used for new test cases which avoids such mistakes. > This is a follow-up jira for the hadoop-common project.
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1646: HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard
hadoop-yetus removed a comment on issue #1646: HADOOP-15430. hadoop fs -mkdir -p path-ending-with-slash/ fails with s3guard URL: https://github.com/apache/hadoop/pull/1646#issuecomment-54084 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | 0 | reexec | 45 | Docker mode activated. | ||| _ Prechecks _ | | +1 | dupname | 0 | No case conflicting files found. | | +1 | @author | 0 | The patch does not contain any @author tags. | | +1 | test4tests | 0 | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 | mvninstall | 1311 | trunk passed | | +1 | compile | 38 | trunk passed | | +1 | checkstyle | 25 | trunk passed | | +1 | mvnsite | 41 | trunk passed | | +1 | shadedclient | 824 | branch has no errors when building and testing our client artifacts. | | +1 | javadoc | 32 | trunk passed | | 0 | spotbugs | 63 | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 | findbugs | 61 | trunk passed | ||| _ Patch Compile Tests _ | | +1 | mvninstall | 38 | the patch passed | | +1 | compile | 29 | the patch passed | | +1 | javac | 29 | the patch passed | | -0 | checkstyle | 21 | hadoop-tools/hadoop-aws: The patch generated 4 new + 18 unchanged - 0 fixed = 22 total (was 18) | | +1 | mvnsite | 34 | the patch passed | | +1 | whitespace | 0 | The patch has no whitespace issues. | | +1 | shadedclient | 878 | patch has no errors when building and testing our client artifacts. | | +1 | javadoc | 27 | the patch passed | | +1 | findbugs | 64 | the patch passed | ||| _ Other Tests _ | | +1 | unit | 91 | hadoop-aws in the patch passed. | | +1 | asflicense | 35 | The patch does not generate ASF License warnings. 
| | | | 3707 | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.3 Server=19.03.3 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1646 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 00309628183a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / ec86f42 | | Default Java | 1.8.0_222 | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/1/testReport/ | | Max. process+thread count | 412 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1646/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-16653) S3Guard DDB overreacts to no tag access
[ https://issues.apache.org/jira/browse/HADOOP-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950899#comment-16950899 ] Steve Loughran commented on HADOOP-16653: - Certainly on read access denied, I'd like to see: silence and no attempt to update. What about the sequence: read tag, tag not found, attempt write? Let's make that an info, not a warning. Warnings create support calls. > S3Guard DDB overreacts to no tag access > --- > > Key: HADOOP-16653 > URL: https://issues.apache.org/jira/browse/HADOOP-16653 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Minor > > if you don't have permissions to read or write DDB tags it logs a lot every > time you bring up a guarded FS > # we shouldn't worry so much about no tag access if version is there > # if you can't read the tag, no point trying to write
[jira] [Commented] (HADOOP-16653) S3Guard DDB overreacts to no tag access
[ https://issues.apache.org/jira/browse/HADOOP-16653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950897#comment-16950897 ] Steve Loughran commented on HADOOP-16653: - Log {code} 2019-10-14 11:22:44,587 [JUnit-testRestrictDDBTagAccess] WARN s3guard.DynamoDBMetadataStoreTableManager (DynamoDBMetadataStoreTableManager.java:getVersionMarkerFromTags(255)) - Exception while getting tags from the dynamo table: User: arn:aws:sts::980678866538:assumed-role/stevel-s3guard/test is not authorized to perform: dynamodb:ListTagsOfResource on resource: arn:aws:dynamodb:eu-west-1:980678866538:table/hwdev-steve-ireland-new (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: P9V270FPO034B5E55QLRCJK8UVVV4KQNSO5AEMVJF66Q9ASUAAJG) 2019-10-14 11:22:44,587 [JUnit-testRestrictDDBTagAccess] INFO s3guard.DynamoDBMetadataStoreTableManager (DynamoDBMetadataStoreTableManager.java:verifyVersionCompatibility(417)) - Table hwdev-steve-ireland-new contains no version marker TAG but contains compatible version marker ITEM. Restoring the version marker item from item. 
2019-10-14 11:22:44,622 [JUnit-testRestrictDDBTagAccess] WARN s3guard.DynamoDBMetadataStoreTableManager (DynamoDBMetadataStoreTableManager.java:tagTableWithVersionMarker(238)) - Exception during tagging table: User: arn:aws:sts::980678866538:assumed-role/stevel-s3guard/test is not authorized to perform: dynamodb:TagResource on resource: arn:aws:dynamodb:eu-west-1:980678866538:table/hwdev-steve-ireland-new (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: AccessDeniedException; Request ID: {code} > S3Guard DDB overreacts to no tag access > --- > > Key: HADOOP-16653 > URL: https://issues.apache.org/jira/browse/HADOOP-16653 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Gabor Bota >Priority: Minor > > if you don't have permissions to read or write DDB tags it logs a lot every > time you bring up a guarded FS > # we shouldn't worry so much about no tag access if version is there > # if you can't read the tag, no point trying to write
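The behaviour proposed in the comments above — if the tag read is denied, skip the write attempt entirely — can be sketched in isolation. TagStore, AccessDenied, and ensureVersionTag below are hypothetical stand-ins for the real DynamoDB ListTagsOfResource/TagResource calls, not Hadoop APIs:

```java
import java.util.Optional;
import java.util.concurrent.atomic.AtomicReference;

/**
 * Sketch: when reading the version-marker tag is denied, never attempt the
 * write (it would be denied too), and return quietly so callers can log at
 * INFO rather than WARN.
 */
public class TagAccessSketch {
  static class AccessDenied extends RuntimeException {}

  interface TagStore {
    String readVersionTag();          // may throw AccessDenied
    void writeVersionTag(String v);   // may also be denied
  }

  /** Returns the tag if readable; skips the write when the read is denied. */
  static Optional<String> ensureVersionTag(TagStore store, String version) {
    String tag;
    try {
      tag = store.readVersionTag();
    } catch (AccessDenied e) {
      // No read access: writing would fail too, so don't try.
      return Optional.empty();
    }
    if (tag == null) {                // readable but absent: restore it
      store.writeVersionTag(version);
      return Optional.of(version);
    }
    return Optional.of(tag);
  }

  public static void main(String[] args) {
    AtomicReference<String> written = new AtomicReference<>();
    TagStore denied = new TagStore() {
      public String readVersionTag() { throw new AccessDenied(); }
      public void writeVersionTag(String v) { written.set(v); }
    };
    System.out.println(ensureVersionTag(denied, "100").isPresent()); // false
    System.out.println(written.get());                               // null: no write attempted
  }
}
```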
[GitHub] [hadoop] steveloughran opened a new pull request #1653: HADOOP-16653. S3Guard DDB overreacts to no tag access.
steveloughran opened a new pull request #1653: HADOOP-16653. S3Guard DDB overreacts to no tag access. URL: https://github.com/apache/hadoop/pull/1653 Initial PR just creates the test to demonstrate the issue. Change-Id: I8ab98acbdf3d854491571ee98627f96a98cbde48
[jira] [Created] (HADOOP-16653) S3Guard DDB overreacts to no tag access
Steve Loughran created HADOOP-16653: --- Summary: S3Guard DDB overreacts to no tag access Key: HADOOP-16653 URL: https://issues.apache.org/jira/browse/HADOOP-16653 Project: Hadoop Common Issue Type: Sub-task Components: fs/s3 Affects Versions: 3.3.0 Reporter: Steve Loughran Assignee: Gabor Bota if you don't have permissions to read or write DDB tags it logs a lot every time you bring up a guarded FS # we shouldn't worry so much about no tag access if version is there # if you can't read the tag, no point trying to write
[jira] [Updated] (HADOOP-16642) ITestDynamoDBMetadataStoreScale fails when throttled.
[ https://issues.apache.org/jira/browse/HADOOP-16642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16642: Status: Patch Available (was: Open) > ITestDynamoDBMetadataStoreScale fails when throttled. > - > > Key: HADOOP-16642 > URL: https://issues.apache.org/jira/browse/HADOOP-16642 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3, test >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Major > > ITestDynamoDBMetadataStoreScale tries to create a scale test iff the table > isn't PAYG. Its failing with the wrong text being returned. > Proposed: don't look for any text > {code} > 13:06:22 java.lang.AssertionError: > 13:06:22 Expected throttling message: Expected to find ' This may be because > the write threshold of DynamoDB is set too low.' > but got unexpected exception: > org.apache.hadoop.fs.s3a.AWSServiceThrottledException: > Put tombstone on s3a://fake-bucket/moved-here: > com.amazonaws.services.dynamodbv2.model.ProvisionedThroughputExceededException: > > The level of configured provisioned throughput for the table was exceeded. > Consider increasing your provisioning level with the UpdateTable API. > (Service: AmazonDynamoDBv2; Status Code: 400; Error Code: > ProvisionedThroughputExceededException; > Request ID: L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG): > The level of configured provisioned throughput for the table was exceeded. > Consider increasing your provisioning level with the UpdateTable API. 
> (Service: AmazonDynamoDBv2; Status Code: 400; > Error Code: ProvisionedThroughputExceededException; Request ID: > L12H9UM7PE8K0ILPGGTF4QG367VV4KQNSO5AEMVJF66Q9ASUAAJG) > 13:06:22 at > org.apache.hadoop.fs.s3a.S3AUtils.translateDynamoDBException(S3AUtils.java:402) > 13 > {code}
[jira] [Commented] (HADOOP-12007) GzipCodec native CodecPool leaks memory
[ https://issues.apache.org/jira/browse/HADOOP-12007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950864#comment-16950864 ] Victor Zhang commented on HADOOP-12007: --- I have the same problem when a Spark Streaming program saves data to HDFS with gzip. > GzipCodec native CodecPool leaks memory > --- > > Key: HADOOP-12007 > URL: https://issues.apache.org/jira/browse/HADOOP-12007 > Project: Hadoop Common > Issue Type: Bug >Affects Versions: 2.7.0 >Reporter: Yejun Yang >Priority: Major > > org/apache/hadoop/io/compress/GzipCodec.java call > CompressionCodec.Util.createOutputStreamWithCodecPool to use CodecPool. But > compressor objects are actually never returned to pool which cause memory > leak. > HADOOP-10591 uses CompressionOutputStream.close() to return Compressor object > to pool. But CompressionCodec.Util.createOutputStreamWithCodecPool actually > returns a CompressorStream which overrides close(). > This cause CodecPool.returnCompressor never being called. In my log file I > can see lots of "Got brand-new compressor [.gz]" but no "Got recycled > compressor".
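The leak pattern described in the issue — a stream whose close() override never returns the pooled compressor — can be modelled with a minimal pool. All names below (PoolLeakDemo, LeakyStream, borrow, recycle) are illustrative stand-ins, not Hadoop's CodecPool API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Minimal model of the GzipCodec/CompressorStream leak: a pooled resource
 * is only recycled if the stream's close() returns it to the pool.
 */
public class PoolLeakDemo {
  static final Deque<Object> POOL = new ArrayDeque<>();

  static Object borrow() { return POOL.isEmpty() ? new Object() : POOL.pop(); }
  static void recycle(Object compressor) { POOL.push(compressor); }

  /** Overrides close() without recycling: the compressor leaks. */
  static class LeakyStream implements AutoCloseable {
    final Object compressor = borrow();
    @Override public void close() { /* forgets recycle(compressor) */ }
  }

  /** Returns the compressor to the pool on close, as HADOOP-10591 intended. */
  static class FixedStream implements AutoCloseable {
    final Object compressor = borrow();
    @Override public void close() { recycle(compressor); }
  }

  public static void main(String[] args) throws Exception {
    try (LeakyStream s = new LeakyStream()) { }
    System.out.println("after leaky close, pooled=" + POOL.size()); // 0: leaked

    try (FixedStream s = new FixedStream()) { }
    System.out.println("after fixed close, pooled=" + POOL.size()); // 1: recycled
  }
}
```

The empty pool after LeakyStream.close() corresponds to the reporter's observation of endless "Got brand-new compressor" log lines with no "Got recycled compressor".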
[GitHub] [hadoop] sodonnel closed pull request #1615: HDDS-2196 Add CLI Commands and Protobuf messages to trigger decom states
sodonnel closed pull request #1615: HDDS-2196 Add CLI Commands and Protobuf messages to trigger decom states URL: https://github.com/apache/hadoop/pull/1615 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] timmylicheng commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode
timmylicheng commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-541576917

> > OK, I will send out a new PR. How do I do a mvn build under the new repo now? I was not able to do it under the hadoop-ozone directory.
>
> Thank you very much for taking care of the migration of your PRs. (Unfortunately I can't do it with the GitHub API, as I can't fake the reporter, and I would like to keep it.)
>
> Regarding the build in the new repo: you can do it from the root level of the project:
>
> 1. do `mvn clean install -f pom.ozone.xml -DskipTests`
> 2. or rebase and do a simple `mvn clean install -DskipTests`
>
> ((
> 1. One of the benefits of using a separate repo is that we can create a new README/CONTRIBUTION.md where we can add this information. I opened HDDS-2292 and HDDS-2293.
> 2. The other benefit is the simple top-level pom.xml. I just merged #10, but you need to rebase to use it.
> ))

CI build failed on the new PR: https://github.com/apache/hadoop-ozone/pull/13. Could you please take a look?
[jira] [Commented] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop
[ https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950823#comment-16950823 ]

Norbert Kalmár commented on HADOOP-16579:
-----------------------------------------

If we only update Curator to 4.2.0, TestZKSignerSecretProvider will fail because Curator looks for a non-existent field: java.lang.NoSuchFieldError: configFileStr. My understanding is that Curator decides at runtime whether it runs in 3.4 or 3.5 ZK compatibility mode; unfortunately, the error logs show otherwise. (I tried both excluding and keeping Curator's ZooKeeper dependency.)

If we also update ZooKeeper to 3.5.5, TestZKSignerSecretProvider runs just fine, but TestZKFailoverController times out because ZKFailoverController.doGracefulFailover() no longer works. (I think it can't read back which node became active. Maybe it has to do with the need in 3.5 to whitelist the four-letter-word commands?)

> Upgrade to Apache Curator 4.2.0 in Hadoop
> -----------------------------------------
>
>                 Key: HADOOP-16579
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16579
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Mate Szalay-Beko
>            Assignee: Norbert Kalmár
>            Priority: Major
>
> Currently in Hadoop we are using [ZooKeeper version 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
> ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains
> many new features (including SSL-related improvements, which can be very
> important for production use; see [the release notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high-level ZooKeeper client library that makes it easier
> to use the low-level ZooKeeper API. Currently [in Hadoop we use Curator 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
> and [in Ozone we use Curator 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 3.x is
> compatible only with the new ZooKeeper 3.5.x releases. Fortunately, the
> latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and
> 3.5.x (see [the relevant Curator page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.); other
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
> - change the Curator version in Hadoop to the latest stable 4.x version
>   (currently 4.2.0)
> - also make sure we don't have multiple ZooKeeper versions on the classpath,
>   to avoid runtime problems (it is [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the
>   ZooKeeper that comes with Curator, so that only a single ZooKeeper version
>   is used at runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in
> Hadoop; we only want to make it possible for the community to build / use
> Hadoop with the new ZooKeeper (e.g. if they need to secure the ZooKeeper
> communication with SSL, which is only supported in the new ZooKeeper
> version). Upgrading to Curator 4.x should keep Hadoop compatible with both
> ZooKeeper 3.4 and 3.5.
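The "exclude the ZooKeeper that comes with Curator" step described above is usually done with a Maven exclusion. The fragment below is a hypothetical sketch (artifact `curator-recipes` and the 3.4.13 pin are illustrative, not the actual Hadoop patch), following the Curator compatibility page's recommendation so only one ZooKeeper ends up on the classpath:

```xml
<!-- Sketch: depend on Curator 4.2.0 but exclude its bundled ZooKeeper,
     then declare the ZooKeeper version explicitly in one place. -->
<dependency>
  <groupId>org.apache.curator</groupId>
  <artifactId>curator-recipes</artifactId>
  <version>4.2.0</version>
  <exclusions>
    <exclusion>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
    </exclusion>
  </exclusions>
</dependency>
<dependency>
  <groupId>org.apache.zookeeper</groupId>
  <artifactId>zookeeper</artifactId>
  <version>3.4.13</version>
</dependency>
```

With this shape, switching the build to ZooKeeper 3.5.5 is a single version change, which is exactly the flexibility the ticket asks for.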
[GitHub] [hadoop] nkalmar closed pull request #1629: HADOOP-16579 - Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.5
nkalmar closed pull request #1629: HADOOP-16579 - Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1629
[GitHub] [hadoop] nkalmar commented on issue #1629: HADOOP-16579 - Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.5
nkalmar commented on issue #1629: HADOOP-16579 - Upgrade to Apache Curator 4.2.0 and ZooKeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1629#issuecomment-541559263

If we only update Curator to 4.2.0, TestZKSignerSecretProvider will fail because Curator looks for a non-existent field: java.lang.NoSuchFieldError: configFileStr. My understanding is that Curator decides at runtime whether it runs in 3.4 or 3.5 ZK compatibility mode; unfortunately, the error logs show otherwise. (I tried both excluding and keeping Curator's ZooKeeper dependency.)

If we also update ZooKeeper to 3.5.5, TestZKSignerSecretProvider runs just fine, but TestZKFailoverController times out because ZKFailoverController.doGracefulFailover() no longer works. (I think it can't read back which node became active. Maybe it has to do with the need in 3.5 to whitelist the four-letter-word commands?)
[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos
[ https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950806#comment-16950806 ]

lqjacklee commented on HADOOP-15870:
------------------------------------

[~ayushtkn] Thank you very much. We will check it.

> S3AInputStream.remainingInFile should use nextReadPos
> -----------------------------------------------------
>
>                 Key: HADOOP-15870
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15870
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.4, 3.1.1
>            Reporter: Shixiong Zhu
>            Assignee: lqjacklee
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch,
>                      HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
> Otherwise `remainingInFile` will not change after `seek`.
[GitHub] [hadoop] elek closed pull request #1586: HDDS-2240. Command line tool for OM HA.
elek closed pull request #1586: HDDS-2240. Command line tool for OM HA.
URL: https://github.com/apache/hadoop/pull/1586
[GitHub] [hadoop] elek commented on issue #1586: HDDS-2240. Command line tool for OM HA.
elek commented on issue #1586: HDDS-2240. Command line tool for OM HA.
URL: https://github.com/apache/hadoop/pull/1586#issuecomment-541550284

Thanks for the migration, @hanishakoneru (and BTW, thanks for the update; I will check it). I am closing this one as we have the new one.
[GitHub] [hadoop] elek commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode
elek commented on issue #1431: HDDS-1569 Support creating multiple pipelines with same datanode
URL: https://github.com/apache/hadoop/pull/1431#issuecomment-541548955

> OK, I will send out a new PR. How do I do a mvn build under the new repo now? I was not able to do it under the hadoop-ozone directory.

Thank you very much for taking care of the migration of your PRs. (Unfortunately I can't do it with the GitHub API, as I can't fake the reporter, and I would like to keep it.)

Regarding the build in the new repo: you can do it from the root level of the project:

1. do `mvn clean install -f pom.ozone.xml -DskipTests`
2. or rebase and do a simple `mvn clean install -DskipTests`

((
1. One of the benefits of using a separate repo is that we can create a new README/CONTRIBUTION.md where we can add this information. I opened HDDS-2292 and HDDS-2293.
2. The other benefit is the simple top-level pom.xml. I just merged #10, but you need to rebase to use it.
))
[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos
[ https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16950795#comment-16950795 ]

Ayush Saxena commented on HADOOP-15870:
---------------------------------------

Hi [~ste...@apache.org] [~Jack-Lee], it seems this change broke {{TestRouterWebHDFSContractSeek}}. Can you give it a check once?

Ref: https://builds.apache.org/job/PreCommit-HDFS-Build/28085/testReport/org.apache.hadoop.fs.contract.router.web/TestRouterWebHDFSContractSeek/

> S3AInputStream.remainingInFile should use nextReadPos
> -----------------------------------------------------
>
>                 Key: HADOOP-15870
>                 URL: https://issues.apache.org/jira/browse/HADOOP-15870
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.8.4, 3.1.1
>            Reporter: Shixiong Zhu
>            Assignee: lqjacklee
>            Priority: Major
>             Fix For: 3.3.0
>
>         Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch,
>                      HADOOP-15870-004.patch, HADOOP-15870-005.patch
>
> Otherwise `remainingInFile` will not change after `seek`.