[GitHub] [hadoop] hadoop-yetus commented on pull request #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
hadoop-yetus commented on pull request #1858: URL: https://github.com/apache/hadoop/pull/1858#issuecomment-618801665

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|:---------|--------:|:--------|
| +0 :ok: | reexec | 0m 0s | Docker mode activated. |
| -1 :x: | patch | 0m 5s | https://github.com/apache/hadoop/pull/1858 does not apply to trunk. Rebase required? Wrong branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

| Subsystem | Report/Notes |
|----------:|:-------------|
| GITHUB PR | https://github.com/apache/hadoop/pull/1858 |
| JIRA Issue | HDFS-15168 |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1858/3/console |
| versions | git=2.17.1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] bilaharith commented on pull request #1975: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not
bilaharith commented on pull request #1975: URL: https://github.com/apache/hadoop/pull/1975#issuecomment-618800382

> I see the comments in the [old PR](https://github.com/apache/hadoop/pull/1969) have been resolved.
> LGTM, +1.
> @bilaharith please close the [old one](https://github.com/apache/hadoop/pull/1969) to avoid confusion.

Done.
[jira] [Updated] (HADOOP-16739) Fix native build failure of hadoop-pipes on CentOS 8
[ https://issues.apache.org/jira/browse/HADOOP-16739?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Masatake Iwasaki updated HADOOP-16739: Attachment: HADOOP-16739-branch-3.2.001.patch

> Fix native build failure of hadoop-pipes on CentOS 8
> Key: HADOOP-16739
> URL: https://issues.apache.org/jira/browse/HADOOP-16739
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/pipes
> Affects Versions: 2.10.0, 3.2.1
> Reporter: Masatake Iwasaki
> Assignee: Masatake Iwasaki
> Priority: Major
> Fix For: 3.3.0, 2.10.1
> Attachments: HADOOP-16739-branch-2.10.001.patch, HADOOP-16739-branch-3.2.001.patch, HADOOP-16739.001.patch
>
> Native build fails in hadoop-tools/hadoop-pipes on CentOS 8 due to lack of rpc.h, which was removed from glibc.

-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16739) Fix native build failure of hadoop-pipes on CentOS 8
[ https://issues.apache.org/jira/browse/HADOOP-16739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091187#comment-17091187 ] Masatake Iwasaki commented on HADOOP-16739: Backporting this to branch-3.2 by just replacing the version of protocol buffers (s/3.7.1/2.5.0/) in BUILDING.txt.
[jira] [Updated] (HADOOP-17002) ABFS: Avoid storage calls to check if the account is HNS enabled or not
[ https://issues.apache.org/jira/browse/HADOOP-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bilahari T H updated HADOOP-17002: Resolution: Fixed Status: Resolved (was: Patch Available)

> ABFS: Avoid storage calls to check if the account is HNS enabled or not
> Key: HADOOP-17002
> URL: https://issues.apache.org/jira/browse/HADOOP-17002
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.4.0
> Reporter: Bilahari T H
> Assignee: Bilahari T H
> Priority: Minor
> Fix For: 3.4.0
>
> Each time an FS instance is created, a getAcl call is made. If the call fails with 400 Bad Request, the account is determined to be a non-HNS account. The recommendation is to add a config so that store calls to determine account HNS status can be avoided: if the config is set, use it to determine the account's HNS status; if it is not present in core-site, the default behaviour remains calling getAcl.
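The tri-state behaviour described above (config wins when present, otherwise fall back to a getAcl probe) can be sketched as follows. The class and method names here are illustrative only, not the actual ABFS driver API:

```java
import java.util.function.BooleanSupplier;

// Sketch of the tri-state lookup described in HADOOP-17002. The class and
// method names are hypothetical; the real ABFS driver code differs.
class HnsConfigSketch {

  /**
   * @param configValue the value from core-site ("true", "false", or null when unset)
   * @param getAclProbe the fallback store call; invoked only when the config is unset
   */
  static boolean isNamespaceEnabled(String configValue, BooleanSupplier getAclProbe) {
    if (configValue != null) {
      // The config wins: no storage round trip is made.
      return Boolean.parseBoolean(configValue.trim());
    }
    // Default behaviour: probe the account with getAcl, as before the change.
    return getAclProbe.getAsBoolean();
  }
}
```

The key point of the design is that a set config short-circuits the probe entirely, so FS instance creation no longer costs one storage call per instance.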
[GitHub] [hadoop] bilaharith commented on pull request #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not
bilaharith commented on pull request #1969: URL: https://github.com/apache/hadoop/pull/1969#issuecomment-618798891 Closing this PR, as Yetus kept failing even though the local runs were succeeding. Created a new PR: https://github.com/apache/hadoop/pull/1975.
[GitHub] [hadoop] amarnathkarthik commented on a change in pull request #1858: HDFS-15168: ABFS enhancement to translate AAD Object to Linux identities
amarnathkarthik commented on a change in pull request #1858: URL: https://github.com/apache/hadoop/pull/1858#discussion_r414280841

## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/utils/TextFileBasedIdentityHandler.java

## @@ -0,0 +1,192 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.fs.azurebfs.utils;
+
+import com.google.common.base.Preconditions;
+import com.google.common.base.Strings;
+import java.io.File;
+import java.io.IOException;
+import java.util.HashMap;
+import org.apache.commons.io.FileUtils;
+import org.apache.commons.io.LineIterator;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.COLON;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.EMPTY_STRING;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HASH;
+
+/**
+ * {@code TextFileBasedIdentityHandler} is a {@link IdentityHandler} implements
+ * translation operation which returns identity mapped to AAD identity by
+ * loading the mapping file from the configured location. Location of the
+ * mapping file should be configured in {@code core-site.xml}
+ *
+ * User identity file should be delimited by colon in below format.
+ *
+ * OBJ_ID:USER_NAME:USER_ID:GROUP_ID:SPI_NAME:APP_ID

Review comment: @steveloughran, clarification - you would want # with \ block or just #?

```
 * Group identity file should be delimited by colon in below format.
 *
 * # OBJ_ID:GROUP_NAME:GROUP_ID:SGP_NAME
 *
```
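The colon-delimited mapping format under discussion can be illustrated with a small parser. This is only a sketch of the file format quoted above (with '#' treated as a comment marker, per the reviewer's example); the class and method names are hypothetical and this is not the TextFileBasedIdentityHandler implementation:

```java
import java.util.List;

// Hypothetical parser for the user-identity mapping line discussed above:
// OBJ_ID:USER_NAME:USER_ID:GROUP_ID:SPI_NAME:APP_ID
class IdentityLineSketch {

  /** Returns the USER_NAME mapped to the given AAD object id, or null if absent. */
  static String lookupUserName(List<String> lines, String objectId) {
    for (String line : lines) {
      String trimmed = line.trim();
      // '#' marks a comment line, as in the reviewer's group-file example.
      if (trimmed.isEmpty() || trimmed.startsWith("#")) {
        continue;
      }
      String[] fields = trimmed.split(":");
      // OBJ_ID is the first field, USER_NAME the second.
      if (fields.length >= 2 && fields[0].equals(objectId)) {
        return fields[1];
      }
    }
    return null;
  }
}
```

For example, with the lines `# OBJ_ID:USER_NAME:...` and `abc-123:alice:1001:3000:alice.spi:app-1`, looking up `abc-123` would yield `alice`.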
[GitHub] [hadoop] iwasakims commented on pull request #1976: HADOOP-16905. Update jackson-databind to 2.10.3 to relieve us from th…
iwasakims commented on pull request #1976: URL: https://github.com/apache/hadoop/pull/1976#issuecomment-618787010 @jojochuang I got a PluginExecutionException in hadoop-maven-plugin when I applied this patch to branch-3.1. Cherry-picking 832852ce4ff0 fixed the error.
[jira] [Commented] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091131#comment-17091131 ] Hadoop QA commented on HADOOP-17011:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 30s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || trunk Compile Tests || || ||
| 0 | mvndep | 0m 44s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 11s | trunk passed |
| -1 | compile | 19m 20s | root in trunk failed. |
| +1 | checkstyle | 3m 16s | trunk passed |
| +1 | mvnsite | 2m 1s | trunk passed |
| +1 | shadedclient | 22m 0s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 22s | trunk passed |
| 0 | spotbugs | 0m 40s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 44s | trunk passed |
|| || Patch Compile Tests || || ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 8s | the patch passed |
| -1 | compile | 17m 10s | root in the patch failed. |
| -1 | javac | 17m 10s | root in the patch failed. |
| +1 | checkstyle | 2m 52s | the patch passed |
| +1 | mvnsite | 1m 54s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 46s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 20s | the patch passed |
| +1 | findbugs | 3m 1s | the patch passed |
|| || Other Tests || || ||
| +1 | unit | 9m 43s | hadoop-common in the patch passed. |
| +1 | unit | 0m 39s | hadoop-mapreduce-client-uploader in the patch passed. |
| +1 | asflicense | 0m 47s | The patch does not generate ASF License warnings. |
| | | 129m 2s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16912/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17011 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13001022/HADOOP-17011-003.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs
[jira] [Comment Edited] (HADOOP-17007) hadoop-cos fails to build
[ https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091092#comment-17091092 ] Mingliang Liu edited comment on HADOOP-17007 at 4/24/20, 2:26 AM:

{quote}
Is it a compilation error on the 3.3.0 branch?
{quote}
Yes, this is a compile-related issue; see the command [~aajisaka] is using to reproduce it. We see this in {{trunk}} as well. [~yuyang733]

was (Author: liuml07): Yes, this is a compile-related issue; see the command [~aajisaka] is using to reproduce it. We see this in {{trunk}} as well. [~yuyang733]

> hadoop-cos fails to build
> Key: HADOOP-17007
> URL: https://issues.apache.org/jira/browse/HADOOP-17007
> Project: Hadoop Common
> Issue Type: Bug
> Components: fs/cos
> Reporter: Wei-Chiu Chuang
> Assignee: YangY
> Priority: Major
> Labels: release-blocker
>
> Found the following compilation error in a PR precommit. The failure doesn't seem related to the PR itself. Can't reproduce locally though.
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1972/1/artifact/out/patch-compile-root.txt
> {noformat}
> [INFO] Apache Hadoop Tencent COS Support .. FAILURE [ 0.074 s]
> [INFO] Apache Hadoop Cloud Storage ........ SKIPPED
> [INFO] Apache Hadoop Cloud Storage Project  SKIPPED
> [INFO] BUILD FAILURE
> [INFO] Total time: 17:31 min
> [INFO] Finished at: 2020-04-22T07:37:51+00:00
> [INFO] Final Memory: 192M/1714M
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-dependency-plugin:3.0.2:copy-dependencies (package) on project hadoop-cos: Artifact has not been packaged yet. When used on reactor artifact, copy should be executed after packaging: see MDEP-187. -> [Help 1]
> {noformat}
[jira] [Updated] (HADOOP-17007) hadoop-cos fails to build
[ https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HADOOP-17007: Target Version/s: 3.3.0, 3.4.0 (was: 3.3.0)
[jira] [Assigned] (HADOOP-17007) hadoop-cos fails to build
[ https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YangY reassigned HADOOP-17007: Assignee: YangY
[jira] [Commented] (HADOOP-17007) hadoop-cos fails to build
[ https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091099#comment-17091099 ] YangY commented on HADOOP-17007: [~weichiu] [~liuml07] [~aajisaka] I will check the issue and fix it as soon as possible.
[jira] [Commented] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091098#comment-17091098 ] Hadoop QA commented on HADOOP-17011:

(x) *-1 overall*

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 1m 27s | Docker mode activated. |
|| || Prechecks || || ||
| +1 | dupname | 0m 1s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
|| || trunk Compile Tests || || ||
| 0 | mvndep | 0m 44s | Maven dependency ordering for branch |
| +1 | mvninstall | 23m 0s | trunk passed |
| -1 | compile | 19m 40s | root in trunk failed. |
| +1 | checkstyle | 2m 50s | trunk passed |
| +1 | mvnsite | 1m 56s | trunk passed |
| +1 | shadedclient | 21m 50s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 38s | trunk passed |
| 0 | spotbugs | 0m 41s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 55s | trunk passed |
|| || Patch Compile Tests || || ||
| 0 | mvndep | 0m 21s | Maven dependency ordering for patch |
| +1 | mvninstall | 1m 10s | the patch passed |
| -1 | compile | 18m 52s | root in the patch failed. |
| -1 | javac | 18m 52s | root in the patch failed. |
| -0 | checkstyle | 2m 59s | root: The patch generated 1 new + 76 unchanged - 0 fixed = 77 total (was 76) |
| +1 | mvnsite | 1m 55s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 16m 6s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 1m 25s | the patch passed |
| +1 | findbugs | 3m 9s | the patch passed |
|| || Other Tests || || ||
| +1 | unit | 10m 21s | hadoop-common in the patch passed. |
| +1 | unit | 0m 40s | hadoop-mapreduce-client-uploader in the patch passed. |
| +1 | asflicense | 0m 48s | The patch does not generate ASF License warnings. |
| | | 132m 30s | |

|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16911/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17011 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13001019/HADOOP-17011-002.patch |
| Optional Tests | dupname asflicense compile
[jira] [Issue Comment Deleted] (HADOOP-17007) hadoop-cos fails to build
[ https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] YangY updated HADOOP-17007: Comment: was deleted (was: Is it a compilation error on the 3.3.0 branch?)
[GitHub] [hadoop] yuyang733 commented on pull request #1952: HDFS-1820. FTPFileSystem attempts to close the outputstream even when it is not initialised.
yuyang733 commented on pull request #1952: URL: https://github.com/apache/hadoop/pull/1952#issuecomment-618762367 @mpryahin I will assist ChenSammi to check and fix this issue on the trunk branch as soon as possible. Thanks for your feedback.
[jira] [Commented] (HADOOP-17007) hadoop-cos fails to build
[ https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091092#comment-17091092 ] Mingliang Liu commented on HADOOP-17007: Yes, this is a compile-related issue; see the command [~aajisaka] is using to reproduce it. We see this in {{trunk}} as well. [~yuyang733]
[jira] [Commented] (HADOOP-17007) hadoop-cos fails to build
[ https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091089#comment-17091089 ] YangY commented on HADOOP-17007: Is it a compilation error on the 3.3.0 branch?
[jira] [Commented] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091076#comment-17091076 ] Ctest commented on HADOOP-17011: Just uploaded the 003 patch to pass checkstyle.

> Tolerate leading and trailing spaces in fs.defaultFS
>
> Key: HADOOP-17011
> URL: https://issues.apache.org/jira/browse/HADOOP-17011
> Project: Hadoop Common
> Issue Type: Bug
> Components: common
> Reporter: Ctest
> Assignee: Ctest
> Priority: Major
> Attachments: HADOOP-17011-001.patch, HADOOP-17011-002.patch, HADOOP-17011-003.patch
>
> *Problem:*
> Currently, `getDefaultUri` uses `conf.get` to read the value of `fs.defaultFS`, so trailing whitespace after a valid URI is not removed and can stop the namenode and datanode from starting up.
>
> *How to reproduce (Hadoop-2.8.5):*
> Set the following configuration in core-site.xml (note the whitespace after 9000) and start HDFS:
> {code:java}
> fs.defaultFS
> hdfs://localhost:9000 
> {code}
> The namenode and datanode won't start, and the log message is:
> {code:java}
> 2020-04-23 11:09:48,198 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
> java.lang.IllegalArgumentException: Illegal character in authority at index 7: hdfs://localhost:9000 
>     at java.net.URI.create(URI.java:852)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.setClientNamenodeAddress(NameNode.java:440)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:897)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.(NameNode.java:885)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
>     at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1694)
> Caused by: java.net.URISyntaxException: Illegal character in authority at index 7: hdfs://localhost:9000 
>     at java.net.URI$Parser.fail(URI.java:2848)
>     at java.net.URI$Parser.parseAuthority(URI.java:3186)
>     at java.net.URI$Parser.parseHierarchical(URI.java:3097)
>     at java.net.URI$Parser.parse(URI.java:3053)
>     at java.net.URI.(URI.java:588)
>     at java.net.URI.create(URI.java:850)
>     ... 5 more
> {code}
>
> *Solution:*
> Use `getTrimmed` instead of `get` for `fs.defaultFS`:
> {code:java}
> public static URI getDefaultUri(Configuration conf) {
>   URI uri =
>       URI.create(fixName(conf.getTrimmed(FS_DEFAULT_NAME_KEY, DEFAULT_FS)));
>   if (uri.getScheme() == null) {
>     throw new IllegalArgumentException("No scheme in default FS: " + uri);
>   }
>   return uri;
> }
> {code}
> I have submitted a patch for trunk about this.
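The failure mode above can be demonstrated with a small standalone sketch. The `parseDefaultFs` helper below is hypothetical; it only mirrors the effect of the proposed `getTrimmed` fix, not the actual `FileSystem.getDefaultUri` code:

```java
import java.net.URI;

public class DefaultFsTrim {
    // Hypothetical helper mirroring the proposed fix: trim the raw config
    // value before handing it to URI.create, as Configuration.getTrimmed would.
    static URI parseDefaultFs(String raw) {
        return URI.create(raw.trim());
    }

    public static void main(String[] args) {
        // Trailing space, as in the core-site.xml from the bug report.
        String withSpace = "hdfs://localhost:9000 ";

        boolean untrimmedFails = false;
        try {
            URI.create(withSpace); // a space is illegal in the authority part
        } catch (IllegalArgumentException e) {
            untrimmedFails = true;
        }
        System.out.println("untrimmed throws: " + untrimmedFails);
        System.out.println("trimmed scheme: " + parseDefaultFs(withSpace).getScheme());
    }
}
```

With the trailing space the untrimmed parse throws the same `IllegalArgumentException` seen in the namenode log, while the trimmed value parses cleanly.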
[jira] [Updated] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ctest updated HADOOP-17011: Attachment: HADOOP-17011-003.patch
[jira] [Commented] (HADOOP-17010) Add queue capacity weights support in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091072#comment-17091072 ] Hadoop QA commented on HADOOP-17010:

-1 overall

| Vote | Subsystem | Runtime | Comment |
|:----:|:----------|--------:|:--------|
| 0 | reexec | 1m 37s | Docker mode activated. |
| | _Prechecks_ | | |
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
| | _trunk Compile Tests_ | | |
| +1 | mvninstall | 23m 28s | trunk passed |
| -1 | compile | 20m 6s | root in trunk failed. |
| +1 | checkstyle | 0m 46s | trunk passed |
| +1 | mvnsite | 1m 25s | trunk passed |
| +1 | shadedclient | 18m 52s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 57s | trunk passed |
| 0 | spotbugs | 2m 10s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 8s | trunk passed |
| | _Patch Compile Tests_ | | |
| +1 | mvninstall | 0m 51s | the patch passed |
| -1 | compile | 19m 26s | root in the patch failed. |
| -1 | javac | 19m 26s | root in the patch failed. |
| +1 | checkstyle | 0m 48s | the patch passed |
| +1 | mvnsite | 1m 24s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 43s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 54s | the patch passed |
| +1 | findbugs | 2m 19s | the patch passed |
| | _Other Tests_ | | |
| +1 | unit | 10m 4s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 45s | The patch does not generate ASF License warnings. |
| | | 121m 23s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16910/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17010 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13001014/HADOOP-17010.002.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 119fd6b61ccd 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 459eb2a |
| Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
| compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/16910/artifact/out/branch-compile-root.txt |
| compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/16910/artifact/out/patch-compile-root.txt |
| javac |
[jira] [Commented] (HADOOP-17002) ABFS: Avoid storage calls to check if the account is HNS enabled or not
[ https://issues.apache.org/jira/browse/HADOOP-17002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091068#comment-17091068 ] Hudson commented on HADOOP-17002: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18177 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18177/]) HADOOP-17002. ABFS: Adding config to determine if the account is HNS (github: rev 30ef8d0f1a1463931fe581a46c739dad4c8260e4) * (add) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/enums/Trilean.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java * (add) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/enums/package-info.java * (add) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TrileanTests.java * (add) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/contracts/exceptions/TrileanConversionException.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/ConfigurationKeys.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/constants/FileSystemConfigurations.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java * (edit) hadoop-tools/hadoop-azure/src/site/markdown/abfs.md * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestGetNameSpaceEnabled.java > ABFS: Avoid storage calls to check if the account is HNS enabled or not > --- > > Key: HADOOP-17002 > URL: https://issues.apache.org/jira/browse/HADOOP-17002 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.4.0 >Reporter: Bilahari T H >Assignee: Bilahari T H >Priority: Minor > Fix For: 3.4.0 > > > Each time an FS instance is created a Getacl call is made. If the call fails > with 400 Bad request, the account is determined to be a non-HNS account. 
> Recommendation is to create a config to avoid store calls that determine the account's HNS status.
> If the config is available, use it to determine the account's HNS status. If the config is not present in core-site, the default behaviour will be to call getAcl.
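The commit above adds a three-valued `Trilean` type so that "config not set" can be distinguished from an explicit true/false. The following is a rough, hedged sketch of that idea; the actual class in hadoop-azure may differ in naming and API:

```java
// Hedged sketch of a three-valued flag; the real Trilean added by this
// commit may expose a different API.
public enum Trilean {
    TRUE, FALSE, UNKNOWN;

    // Map an optional config string to a Trilean. A missing or unparsable
    // value yields UNKNOWN, which signals "fall back to the getAcl probe".
    public static Trilean fromConfigValue(String value) {
        if (value == null) {
            return UNKNOWN;
        }
        switch (value.trim().toLowerCase()) {
            case "true":  return TRUE;
            case "false": return FALSE;
            default:      return UNKNOWN;
        }
    }

    public static void main(String[] args) {
        System.out.println(fromConfigValue(null));    // no config: probe storage
        System.out.println(fromConfigValue("TRUE"));  // config set: skip the probe
    }
}
```

The design point is that a plain boolean cannot express "unset", so a two-valued config would force a default that silently overrides the getAcl probe.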
[jira] [Commented] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091063#comment-17091063 ] Mingliang Liu commented on HADOOP-17011: Looks nice. The checkstyle warning is related to the patch. The compile error is reported and tracked by HADOOP-17007. [~ayushtkn], do you think it's good?
[GitHub] [hadoop] aajisaka commented on pull request #1939: YARN-10223. Duplicate jersey-test-framework-core dependency in yarn-server-common
aajisaka commented on pull request #1939: URL: https://github.com/apache/hadoop/pull/1939#issuecomment-618738732 > What is the policy of PR commits? Is it okay to push the "Squash and merge" button and just close the jira afterwards? I think it's okay to push the "Squash and merge" button and then close the JIRA afterwards.
[jira] [Commented] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091039#comment-17091039 ] Ctest commented on HADOOP-17011: I just uploaded a new patch covering those two places of `conf.get(FS_DEFAULT_NAME_KEY)`.
[jira] [Commented] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091038#comment-17091038 ] Hadoop QA commented on HADOOP-17011:

-1 overall

| Vote | Subsystem | Runtime | Comment |
|:----:|:----------|--------:|:--------|
| 0 | reexec | 1m 17s | Docker mode activated. |
| | _Prechecks_ | | |
| +1 | dupname | 0m 0s | No case conflicting files found. |
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
| | _trunk Compile Tests_ | | |
| +1 | mvninstall | 21m 36s | trunk passed |
| -1 | compile | 17m 54s | root in trunk failed. |
| +1 | checkstyle | 0m 44s | trunk passed |
| +1 | mvnsite | 1m 22s | trunk passed |
| +1 | shadedclient | 18m 2s | branch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 56s | trunk passed |
| 0 | spotbugs | 2m 11s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 | findbugs | 2m 9s | trunk passed |
| | _Patch Compile Tests_ | | |
| +1 | mvninstall | 0m 52s | the patch passed |
| -1 | compile | 18m 53s | root in the patch failed. |
| -1 | javac | 18m 53s | root in the patch failed. |
| -0 | checkstyle | 0m 48s | hadoop-common-project/hadoop-common: The patch generated 1 new + 76 unchanged - 0 fixed = 77 total (was 76) |
| +1 | mvnsite | 1m 34s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 15m 38s | patch has no errors when building and testing our client artifacts. |
| +1 | javadoc | 0m 57s | the patch passed |
| +1 | findbugs | 2m 35s | the patch passed |
| | _Other Tests_ | | |
| +1 | unit | 10m 29s | hadoop-common in the patch passed. |
| +1 | asflicense | 0m 45s | The patch does not generate ASF License warnings. |
| | | 116m 32s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16909/artifact/out/Dockerfile |
| JIRA Issue | HADOOP-17011 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13001011/HADOOP-17011-001.patch |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux a32159bbbc88 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 459eb2a |
| Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
| compile |
[jira] [Updated] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ctest updated HADOOP-17011: Attachment: HADOOP-17011-002.patch
[jira] [Comment Edited] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091020#comment-17091020 ] Ctest edited comment on HADOOP-17011 at 4/23/20, 11:35 PM: [~liuml07] I searched around in the hadoop-trunk and found one more place of `conf.get(FS_DEFAULT_NAME_KEY)`. I can change that one together in a new patch. And interestingly, in ServiceScheduler.java, the code is using `getTrimmed(FS_DEFAULT_NAME_KEY)`. was (Author: ctest.team): [~liuml07] I searched around in the hadoop-trunk and found one more place of `conf.get(FS_DEFAULT_NAME_KEY)`. I can change that one together in a new patch. Also, in ServiceScheduler.java, the code is using `getTrimmed(FS_DEFAULT_NAME_KEY)`.
[jira] [Commented] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091020#comment-17091020 ]

Ctest commented on HADOOP-17011:
--------------------------------
[~liuml07] I searched around in Hadoop trunk and found one more place that calls `conf.get(FS_DEFAULT_NAME_KEY)`. I can change that one together in a new patch. Also, in ServiceScheduler.java the code is already using `getTrimmed(FS_DEFAULT_NAME_KEY)`.
[jira] [Commented] (HADOOP-17007) hadoop-cos fails to build
[ https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17091004#comment-17091004 ]

Mingliang Liu commented on HADOOP-17007:
----------------------------------------
CC: [~yuyang733] and [~Sammi]

> hadoop-cos fails to build
> -------------------------
>
>                 Key: HADOOP-17007
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17007
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: fs/cos
>            Reporter: Wei-Chiu Chuang
>            Priority: Major
>              Labels: release-blocker
>
> Found the following compilation error in a PR precommit. The failure doesn't seem related to the PR itself. Can't reproduce it locally, though.
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1972/1/artifact/out/patch-compile-root.txt
> {noformat}
> [INFO] Apache Hadoop Tencent COS Support .................. FAILURE [  0.074 s]
> [INFO] Apache Hadoop Cloud Storage ........................ SKIPPED
> [INFO] Apache Hadoop Cloud Storage Project ................ SKIPPED
> [INFO] ------------------------------------------------------------------------
> [INFO] BUILD FAILURE
> [INFO] ------------------------------------------------------------------------
> [INFO] Total time: 17:31 min
> [INFO] Finished at: 2020-04-22T07:37:51+00:00
> [INFO] Final Memory: 192M/1714M
> [INFO] ------------------------------------------------------------------------
> [ERROR] Failed to execute goal org.apache.maven.plugins:maven-dependency-plugin:3.0.2:copy-dependencies (package) on project hadoop-cos: Artifact has not been packaged yet. When used on reactor artifact, copy should be executed after packaging: see MDEP-187. -> [Help 1]
> {noformat}
[jira] [Updated] (HADOOP-17010) Add queue capacity weights support in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Fengnan Li updated HADOOP-17010:
--------------------------------
    Attachment: HADOOP-17010.002.patch

> Add queue capacity weights support in FairCallQueue
> ---------------------------------------------------
>
>                 Key: HADOOP-17010
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17010
>             Project: Hadoop Common
>          Issue Type: New Feature
>            Reporter: Fengnan Li
>            Assignee: Fengnan Li
>            Priority: Major
>         Attachments: HADOOP-17010.001.patch, HADOOP-17010.002.patch
>
>
> Right now in FairCallQueue, all subqueues share the same capacity: the total capacity is distributed evenly. This requested feature is to let subqueues have different capacities, so that more important queues can get more capacity and therefore see less queue overflow and fewer client backoffs.
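As a rough sketch of what weighted capacities could look like (the class and method below are illustrative only, not the attached patch): given a total call-queue capacity and per-subqueue weights, each subqueue gets a share proportional to its weight, with rounding leftovers handed to the front (highest-priority) queues:

```java
import java.util.Arrays;

// Illustrative sketch (not the HADOOP-17010 patch): split a total call-queue
// capacity across subqueues in proportion to integer weights, assigning any
// rounding remainder to the front queues so the shares sum to the total.
public class WeightedCapacityDemo {
    static int[] capacities(int totalCapacity, int[] weights) {
        int weightSum = Arrays.stream(weights).sum();
        int[] caps = new int[weights.length];
        int assigned = 0;
        for (int i = 0; i < weights.length; i++) {
            caps[i] = totalCapacity * weights[i] / weightSum; // proportional share
            assigned += caps[i];
        }
        // Hand out rounding leftovers starting from the first (most important) queue.
        for (int i = 0; assigned < totalCapacity; i++, assigned++) {
            caps[i % caps.length]++;
        }
        return caps;
    }

    public static void main(String[] args) {
        // 4 subqueues, total capacity 100, weights 4:2:1:1.
        System.out.println(Arrays.toString(capacities(100, new int[]{4, 2, 1, 1})));
        // prints [51, 25, 12, 12]
    }
}
```

With equal weights this degrades to today's even split, so the weighted form is a strict generalization of the current behaviour.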
[jira] [Updated] (HADOOP-17011) Tolerate leading and trailing spaces in fs.defaultFS
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ctest updated HADOOP-17011:
---------------------------
    Summary: Tolerate leading and trailing spaces in fs.defaultFS  (was: Trailing whitespace in fs.defaultFS will crash namenode and datanode)
[jira] [Commented] (HADOOP-17009) Embrace Immutability of Java Collections
[ https://issues.apache.org/jira/browse/HADOOP-17009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090999#comment-17090999 ]

Mingliang Liu commented on HADOOP-17009:
----------------------------------------
I like the idea. But wait, where is the patch file? The compile failure is related to HADOOP-17007.

> Embrace Immutability of Java Collections
> ----------------------------------------
>
>                 Key: HADOOP-17009
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17009
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: common
>            Reporter: David Mollitor
>            Assignee: David Mollitor
>            Priority: Minor
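No patch is attached yet, so the following is only a hypothetical illustration of the direction the title suggests: Java 9+ factory collections and `Collections.unmodifiableList` both fail fast on mutation, which is the property such an improvement would lean on.

```java
import java.util.Collections;
import java.util.List;

// Illustration only (no patch is attached to HADOOP-17009): exposing
// immutable or unmodifiable collections lets internal state be shared
// safely, because any mutation attempt fails fast instead of silently
// corrupting shared data.
public class ImmutabilityDemo {
    public static void main(String[] args) {
        List<String> hosts = List.of("nn1", "nn2");            // immutable (Java 9+)
        List<String> view = Collections.unmodifiableList(hosts); // read-only view
        try {
            view.add("nn3"); // must throw
            System.out.println("unexpected: add succeeded");
        } catch (UnsupportedOperationException expected) {
            System.out.println("mutation rejected, contents: " + hosts);
            // prints: mutation rejected, contents: [nn1, nn2]
        }
    }
}
```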
[jira] [Comment Edited] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090998#comment-17090998 ]

Ctest edited comment on HADOOP-17011 at 4/23/20, 11:00 PM:
-----------------------------------------------------------
[~ayushtkn] [~liuml07] Thank you for the reply! I can change the title to "Tolerate leading and trailing spaces in fs.defaultFS".

I think the log message here is not clear enough for this misconfiguration. It says:
{code:java}
java.lang.IllegalArgumentException: Illegal character in authority at index 7: hdfs://localhost:9000 
{code}
But it actually has nothing to do with a "character in authority at index 7"; the real cause is the trailing space after "9000". This parameter is used very frequently, and many first-time users could make such a mistake, so using `getTrimmed` can prevent it from happening again.

was (Author: ctest.team):
[~ayushtkn] [~liuml07] Thank you for the reply! I can change the title to "Trailing whitespace in fs.defaultFS will prevent namenode and datanode from starting"; would that be better?

I think the log message here is not clear enough for this misconfiguration. It says:
{code:java}
java.lang.IllegalArgumentException: Illegal character in authority at index 7: hdfs://localhost:9000 
{code}
But it actually has nothing to do with a "character in authority at index 7"; the real cause is the trailing space after "9000". This parameter is used very frequently, and many first-time users could make such a mistake, so using `getTrimmed` can prevent it from happening again.
[jira] [Comment Edited] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090998#comment-17090998 ]

Ctest edited comment on HADOOP-17011 at 4/23/20, 10:59 PM:
-----------------------------------------------------------
[~ayushtkn] [~liuml07] Thank you for the reply! I can change the title to "Trailing whitespace in fs.defaultFS will prevent namenode and datanode from starting"; would that be better?

I think the log message here is not clear enough for this misconfiguration. It says:
{code:java}
java.lang.IllegalArgumentException: Illegal character in authority at index 7: hdfs://localhost:9000 
{code}
But it actually has nothing to do with a "character in authority at index 7"; the real cause is the trailing space after "9000". This parameter is used very frequently, and many first-time users could make such a mistake, so using `getTrimmed` can prevent it from happening again.

was (Author: ctest.team):
[~ayushtkn] Thank you for the reply! I can change the title to "Trailing whitespace in fs.defaultFS will prevent namenode and datanode from starting"; would that be better?

I think the log message here is not clear enough for this misconfiguration. It says:
{code:java}
java.lang.IllegalArgumentException: Illegal character in authority at index 7: hdfs://localhost:9000 
{code}
But it actually has nothing to do with a "character in authority at index 7"; the real cause is the trailing space after "9000". This parameter is used very frequently, and many first-time users could make such a mistake, so using `getTrimmed` can prevent it from happening again.
[jira] [Assigned] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mingliang Liu reassigned HADOOP-17011:
--------------------------------------
    Assignee: Ctest
[jira] [Commented] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090998#comment-17090998 ]

Ctest commented on HADOOP-17011:
--------------------------------
[~ayushtkn] Thank you for the reply! I can change the title to "Trailing whitespace in fs.defaultFS will prevent namenode and datanode from starting"; would that be better?

I think the log message here is not clear enough for this misconfiguration. It says:
{code:java}
java.lang.IllegalArgumentException: Illegal character in authority at index 7: hdfs://localhost:9000 
{code}
But it actually has nothing to do with a "character in authority at index 7"; the real cause is the trailing space after "9000". This parameter is used very frequently, and many first-time users could make such a mistake, so using `getTrimmed` can prevent it from happening again.
[jira] [Commented] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090995#comment-17090995 ]

Mingliang Liu commented on HADOOP-17011:
----------------------------------------
For the patch itself, I'm +1 on it. I agree this is a misconfiguration; the program will fail to start if misconfigured. The JIRA title can be "Tolerate leading and trailing spaces in fs.defaultFS".

So, are there any other places in the source code repo that read this value using `conf.get`? We can change all of them together in this patch, if any. Thanks,
[jira] [Commented] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090979#comment-17090979 ]

Ayush Saxena commented on HADOOP-17011:
---------------------------------------
I must say the title of this JIRA is misleading: "Namenode crash" gives the feeling that the Namenode dies at some point after it has started working. Also, I can't see this as a bug, actually, since it gives the correct exception message and the configuration is indeed wrong. It could be treated as an improvement, if it is required. Is there any prominent reason for this?
[jira] [Updated] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ctest updated HADOOP-17011:
---------------------------
    Description: 

*Problem:*
Currently, `getDefaultUri` uses `conf.get` to read the value of `fs.defaultFS`, so trailing whitespace after a valid URI is not removed and can stop the namenode and datanode from starting up.

*How to reproduce (Hadoop-2.8.5):*
Set the configuration
{code:java}
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000 </value>
</property>
{code}
in core-site.xml (note the whitespace after 9000) and start HDFS. The namenode and datanode won't start, and the log message is:
{code:java}
2020-04-23 11:09:48,198 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: Illegal character in authority at index 7: hdfs://localhost:9000 
	at java.net.URI.create(URI.java:852)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.setClientNamenodeAddress(NameNode.java:440)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:897)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:885)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1626)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1694)
Caused by: java.net.URISyntaxException: Illegal character in authority at index 7: hdfs://localhost:9000 
	at java.net.URI$Parser.fail(URI.java:2848)
	at java.net.URI$Parser.parseAuthority(URI.java:3186)
	at java.net.URI$Parser.parseHierarchical(URI.java:3097)
	at java.net.URI$Parser.parse(URI.java:3053)
	at java.net.URI.<init>(URI.java:588)
	at java.net.URI.create(URI.java:850)
	... 5 more
{code}

*Solution:*
Use `getTrimmed` instead of `get` for `fs.defaultFS`:
{code:java}
public static URI getDefaultUri(Configuration conf) {
  URI uri =
      URI.create(fixName(conf.getTrimmed(FS_DEFAULT_NAME_KEY, DEFAULT_FS)));
  if (uri.getScheme() == null) {
    throw new IllegalArgumentException("No scheme in default FS: " + uri);
  }
  return uri;
}
{code}
I have submitted a patch for trunk about this.

> Trailing whitespace in fs.defaultFS will crash namenode and datanode
> --------------------------------------------------------------------
>
>                 Key: HADOOP-17011
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17011
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: common
>            Reporter: Ctest
>            Priority: Major
>         Attachments: HADOOP-17011-001.patch
[jira] [Updated] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ctest updated HADOOP-17011:
---------------------------
    Attachment: HADOOP-17011-001.patch
        Status: Patch Available  (was: Open)
[jira] [Updated] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
[ https://issues.apache.org/jira/browse/HADOOP-17011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ctest updated HADOOP-17011:
---------------------------
[jira] [Created] (HADOOP-17011) Trailing whitespace in fs.defaultFS will crash namenode and datanode
Ctest created HADOOP-17011:
-------------------------------
             Summary: Trailing whitespace in fs.defaultFS will crash namenode and datanode
                 Key: HADOOP-17011
                 URL: https://issues.apache.org/jira/browse/HADOOP-17011
             Project: Hadoop Common
          Issue Type: Bug
          Components: common
            Reporter: Ctest
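The `Configuration.get` vs. `Configuration.getTrimmed` behavior this issue hinges on can be sketched with a plain map. This `MiniConf` class is a stand-in for illustration only, not Hadoop's actual `Configuration` (which does far more, including variable substitution and defaults from XML resources):

```java
import java.util.HashMap;
import java.util.Map;

public class MiniConf {
    private final Map<String, String> props = new HashMap<>();

    void set(String key, String value) { props.put(key, value); }

    // Returns the stored value verbatim, trailing whitespace included.
    String get(String key, String defaultValue) {
        String v = props.get(key);
        return v == null ? defaultValue : v;
    }

    // Strips surrounding whitespace, which is what the HADOOP-17011 fix relies on.
    String getTrimmed(String key, String defaultValue) {
        String v = props.get(key);
        return v == null ? defaultValue : v.trim();
    }

    // Stores fs.defaultFS with a trailing space and returns {get, getTrimmed}.
    static String[] demo() {
        MiniConf conf = new MiniConf();
        conf.set("fs.defaultFS", "hdfs://localhost:9000 ");
        return new String[] {
            conf.get("fs.defaultFS", "file:///"),
            conf.getTrimmed("fs.defaultFS", "file:///")
        };
    }

    public static void main(String[] args) {
        String[] r = demo();
        System.out.println("get:        [" + r[0] + "]");
        System.out.println("getTrimmed: [" + r[1] + "]");
    }
}
```

With this model, only the trimmed value survives intact for `URI.create`, which is why swapping `get` for `getTrimmed` in `getDefaultUri` prevents the startup crash.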
[GitHub] [hadoop] hadoop-yetus commented on pull request #1976: HADOOP-16905. Update jackson-databind to 2.10.3 to relieve us from th…
hadoop-yetus commented on pull request #1976:
URL: https://github.com/apache/hadoop/pull/1976#issuecomment-618681834

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 11m 40s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ branch-3.1 Compile Tests _ |
| +0 :ok: | mvndep | 0m 27s | Maven dependency ordering for branch |
| -1 :x: | mvninstall | 1m 34s | root in branch-3.1 failed. |
| -1 :x: | compile | 0m 57s | root in branch-3.1 failed. |
| -1 :x: | mvnsite | 0m 17s | hadoop-client-runtime in branch-3.1 failed. |
| -1 :x: | shadedclient | 4m 20s | branch has errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 10s | hadoop-client-runtime in branch-3.1 failed. |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 20s | Maven dependency ordering for patch |
| -1 :x: | mvninstall | 0m 9s | hadoop-client-runtime in the patch failed. |
| -1 :x: | compile | 0m 25s | root in the patch failed. |
| -1 :x: | javac | 0m 25s | root in the patch failed. |
| -1 :x: | mvnsite | 0m 9s | hadoop-client-runtime in the patch failed. |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 3s | The patch has no ill-formed XML file. |
| -1 :x: | shadedclient | 0m 37s | patch has errors when building and testing our client artifacts. |
| -1 :x: | javadoc | 0m 10s | hadoop-client-runtime in the patch failed. |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 10s | hadoop-project in the patch passed. |
| -1 :x: | unit | 0m 9s | hadoop-client-runtime in the patch failed. |
| +1 :green_heart: | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 21m 33s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1976 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 5aa7aa65ea71 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-3.1 / 03ff1d3 |
| Default Java | Oracle Corporation-9-internal+0-2016-04-14-195246.buildd.src |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/branch-mvninstall-root.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/branch-compile-root.txt |
| mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/branch-mvnsite-hadoop-client-modules_hadoop-client-runtime.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/branch-javadoc-hadoop-client-modules_hadoop-client-runtime.txt |
| mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/patch-mvninstall-hadoop-client-modules_hadoop-client-runtime.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/patch-compile-root.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/patch-compile-root.txt |
| mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/patch-mvnsite-hadoop-client-modules_hadoop-client-runtime.txt |
| javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/patch-javadoc-hadoop-client-modules_hadoop-client-runtime.txt |
| unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/artifact/out/patch-unit-hadoop-client-modules_hadoop-client-runtime.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/testReport/ |
| Max. process+thread count | 127 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-client-modules/hadoop-client-runtime U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1976/1/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |
[GitHub] [hadoop] hadoop-yetus commented on pull request #1973: HADOOP-16905. Update jackson-databind to 2.10.3 to relieve us from the endless CVE patches.
hadoop-yetus commented on pull request #1973:
URL: https://github.com/apache/hadoop/pull/1973#issuecomment-618681311

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 1m 25s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ branch-3.2 Compile Tests _ |
| +0 :ok: | mvndep | 0m 18s | Maven dependency ordering for branch |
| +1 :green_heart: | mvninstall | 21m 11s | branch-3.2 passed |
| +1 :green_heart: | compile | 16m 27s | branch-3.2 passed |
| +1 :green_heart: | mvnsite | 0m 52s | branch-3.2 passed |
| +1 :green_heart: | shadedclient | 52m 43s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 50s | branch-3.2 passed |
||| _ Patch Compile Tests _ |
| +0 :ok: | mvndep | 0m 19s | Maven dependency ordering for patch |
| +1 :green_heart: | mvninstall | 2m 33s | the patch passed |
| +1 :green_heart: | compile | 15m 42s | the patch passed |
| +1 :green_heart: | javac | 15m 42s | the patch passed |
| +1 :green_heart: | mvnsite | 0m 53s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. |
| -1 :x: | shadedclient | 13m 35s | patch has errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 52s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 0m 23s | hadoop-project in the patch passed. |
| +1 :green_heart: | unit | 0m 26s | hadoop-client-runtime in the patch passed. |
| +1 :green_heart: | asflicense | 0m 44s | The patch does not generate ASF License warnings. |
| | | 93m 11s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1973/2/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1973 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml |
| uname | Linux 5f8b1a87dbe7 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | branch-3.2 / 48f1c8f |
| Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1973/2/testReport/ |
| Max. process+thread count | 309 (vs. ulimit of 5500) |
| modules | C: hadoop-project hadoop-client-modules/hadoop-client-runtime U: . |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1973/2/console |
| versions | git=2.7.4 maven=3.3.9 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at: us...@infra.apache.org

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] jojochuang opened a new pull request #1976: HADOOP-16905. Update jackson-databind to 2.10.3 to relieve us from th…
jojochuang opened a new pull request #1976:
URL: https://github.com/apache/hadoop/pull/1976

Clean backport.
[GitHub] [hadoop] dhirajh commented on pull request #1964: HDFS-15281: Make sure ZKFC uses dfs.namenode.rpc-address to bind to host address
dhirajh commented on pull request #1964:
URL: https://github.com/apache/hadoop/pull/1964#issuecomment-618663219

> > I'm a little confused with the @hadoop-yetus report... it looks like it is checking old code.
>
> Yes, same here. CC: @aajisaka @hadoop-yetus Basically, after the author updated the PR, Yetus was still generating reports using the old patch/commits.
>
> One way to confirm this:
> 1. The new [build](https://builds.apache.org/job/hadoop-multibranch/job/PR-1964/4/console) was triggered by and generated for the recent commit.
> 2. The two recent commits have [deleted](https://github.com/apache/hadoop/pull/1964/commits/d8b4db1fc0e59c183d0725e3754f1d8d0085f115) a source code file added in the initial commit.
> 3. The checkstyle report still flags that deleted source file.

I have just uploaded the patch diff too, in case that helps kick off a better build.
[GitHub] [hadoop] hadoop-yetus commented on pull request #1974: HADOOP-17009: Embrace Immutability of Java Collections
hadoop-yetus commented on pull request #1974:
URL: https://github.com/apache/hadoop/pull/1974#issuecomment-618661444

:broken_heart: **-1 overall**

| Vote | Subsystem | Runtime | Comment |
|:----:|----------:|--------:|:--------|
| +0 :ok: | reexec | 1m 5s | Docker mode activated. |
||| _ Prechecks _ |
| +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. |
| +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. |
| -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. |
||| _ trunk Compile Tests _ |
| +1 :green_heart: | mvninstall | 21m 24s | trunk passed |
| -1 :x: | compile | 18m 1s | root in trunk failed. |
| +1 :green_heart: | checkstyle | 0m 49s | trunk passed |
| +1 :green_heart: | mvnsite | 1m 21s | trunk passed |
| +1 :green_heart: | shadedclient | 17m 59s | branch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 55s | trunk passed |
| +0 :ok: | spotbugs | 2m 8s | Used deprecated FindBugs config; considering switching to SpotBugs. |
| +1 :green_heart: | findbugs | 2m 5s | trunk passed |
||| _ Patch Compile Tests _ |
| +1 :green_heart: | mvninstall | 0m 51s | the patch passed |
| -1 :x: | compile | 17m 21s | root in the patch failed. |
| -1 :x: | javac | 17m 21s | root in the patch failed. |
| -0 :warning: | checkstyle | 0m 48s | hadoop-common-project/hadoop-common: The patch generated 1 new + 326 unchanged - 3 fixed = 327 total (was 329) |
| +1 :green_heart: | mvnsite | 1m 23s | the patch passed |
| +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 :green_heart: | shadedclient | 15m 15s | patch has no errors when building and testing our client artifacts. |
| +1 :green_heart: | javadoc | 0m 56s | the patch passed |
| +1 :green_heart: | findbugs | 2m 15s | the patch passed |
||| _ Other Tests _ |
| +1 :green_heart: | unit | 9m 10s | hadoop-common in the patch passed. |
| +1 :green_heart: | asflicense | 0m 45s | The patch does not generate ASF License warnings. |
| | | 113m 29s | |

| Subsystem | Report/Notes |
|----------:|:-------------|
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/3/artifact/out/Dockerfile |
| GITHUB PR | https://github.com/apache/hadoop/pull/1974 |
| JIRA Issue | HADOOP-17009 |
| Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux eace52f3e6a4 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 459eb2a |
| Default Java | Private Build-1.8.0_252-8u252-b09-1~16.04-b09 |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/3/artifact/out/branch-compile-root.txt |
| compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/3/artifact/out/patch-compile-root.txt |
| javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/3/artifact/out/patch-compile-root.txt |
| checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/3/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt |
| Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/3/testReport/ |
| Max. process+thread count | 3230 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/3/console |
| versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |

This message was automatically generated.
[jira] [Commented] (HADOOP-17009) Embrace Immutability of Java Collections
[ https://issues.apache.org/jira/browse/HADOOP-17009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17090932#comment-17090932 ]

Hadoop QA commented on HADOOP-17009:
------------------------------------
[GitHub] [hadoop] liuml07 commented on pull request #1964: HDFS-15281: Make sure ZKFC uses dfs.namenode.rpc-address to bind to host address
liuml07 commented on pull request #1964: URL: https://github.com/apache/hadoop/pull/1964#issuecomment-618643923 > I'm a little confused with the @hadoop-yetus report... it looks like it is checking old code. Yes, same here. CC: @aajisaka @hadoop-yetus Basically, after the author updated the PR, Yetus was still generating reports from the old patch/commits. One way to confirm this: 1. The new [build](https://builds.apache.org/job/hadoop-multibranch/job/PR-1964/4/console) was triggered by and generated for the recent commit 2. The two recent commits [deleted](https://github.com/apache/hadoop/pull/1964/commits/d8b4db1fc0e59c183d0725e3754f1d8d0085f115) a source code file added in the initial commit 3. The checkstyle report still flags that deleted source file. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16361) TestSecureLogins#testValidKerberosName fails on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated HADOOP-16361: - Fix Version/s: 2.10.1 > TestSecureLogins#testValidKerberosName fails on branch-2 > > > Key: HADOOP-16361 > URL: https://issues.apache.org/jira/browse/HADOOP-16361 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.10.0, 2.9.2, 2.8.5 >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Major > Fix For: 2.10.1 > > Attachments: HADOOP-16361-branch-2.10.001.patch, > HADOOP-16361-branch-2.10.002.patch > > > This test is failing in branch-2. > {noformat} > [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 26.917 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins > [ERROR] > testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins) > Time elapsed: 0.007 s <<< ERROR! > org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: > No rules applied to zookeeper/localhost > at > org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:401) > at > org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:182) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) 
> at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16361) TestSecureLogins#testValidKerberosName fails on branch-2
[ https://issues.apache.org/jira/browse/HADOOP-16361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Brennan updated HADOOP-16361: - Resolution: Fixed Status: Resolved (was: Patch Available) > TestSecureLogins#testValidKerberosName fails on branch-2 > > > Key: HADOOP-16361 > URL: https://issues.apache.org/jira/browse/HADOOP-16361 > Project: Hadoop Common > Issue Type: Bug > Components: security >Affects Versions: 2.10.0, 2.9.2, 2.8.5 >Reporter: Jim Brennan >Assignee: Jim Brennan >Priority: Major > Fix For: 2.10.1 > > Attachments: HADOOP-16361-branch-2.10.001.patch, > HADOOP-16361-branch-2.10.002.patch > > > This test is failing in branch-2. > {noformat} > [ERROR] Tests run: 11, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: > 26.917 s <<< FAILURE! - in org.apache.hadoop.registry.secure.TestSecureLogins > [ERROR] > testValidKerberosName(org.apache.hadoop.registry.secure.TestSecureLogins) > Time elapsed: 0.007 s <<< ERROR! > org.apache.hadoop.security.authentication.util.KerberosName$NoMatchingRule: > No rules applied to zookeeper/localhost > at > org.apache.hadoop.security.authentication.util.KerberosName.getShortName(KerberosName.java:401) > at > org.apache.hadoop.registry.secure.TestSecureLogins.testValidKerberosName(TestSecureLogins.java:182) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at > org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) > {noformat}
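For context, the `NoMatchingRule` error above comes from `KerberosName.getShortName()`: no configured `hadoop.security.auth_to_local` rule matched the two-component principal `zookeeper/localhost`. A self-contained sketch of the shortening a matching rule is expected to produce (this only mimics the behaviour for illustration; it is not the Hadoop `KerberosName` implementation):

```java
// Simplified sketch of auth_to_local shortening: reduce a principal such as
// "zookeeper/localhost@EXAMPLE.COM" to its first component, "zookeeper".
// Illustration only; not org.apache.hadoop.security.authentication.util.KerberosName.
public class ShortNameSketch {
    public static String shortName(String principal) {
        String withoutRealm = principal.replaceAll("@.*$", ""); // drop "@REALM"
        return withoutRealm.replaceAll("/.*$", "");             // drop "/host"
    }

    public static void main(String[] args) {
        System.out.println(shortName("zookeeper/localhost@EXAMPLE.COM")); // zookeeper
    }
}
```

The real class applies configurable regex-based rules, and throws `NoMatchingRule` when none applies, which is what the test hit on branch-2.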
[GitHub] [hadoop] jojochuang commented on pull request #1973: HADOOP-16905. Update jackson-databind to 2.10.3 to relieve us from the endless CVE patches.
jojochuang commented on pull request #1973: URL: https://github.com/apache/hadoop/pull/1973#issuecomment-618628833 Triggered rebuild. The failure is unrelated, due to YARN-10063 which I reverted.
[GitHub] [hadoop] hadoop-yetus commented on pull request #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation
hadoop-yetus commented on pull request #1820: URL: https://github.com/apache/hadoop/pull/1820#issuecomment-618620734 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 3s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 20 new or modified test files. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 49s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 19m 3s | trunk passed | | -1 :x: | compile | 16m 54s | root in trunk failed. | | +1 :green_heart: | checkstyle | 2m 39s | trunk passed | | +1 :green_heart: | mvnsite | 2m 19s | trunk passed | | +1 :green_heart: | shadedclient | 20m 24s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 45s | trunk passed | | +0 :ok: | spotbugs | 1m 11s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 3m 13s | trunk passed | | -0 :warning: | patch | 1m 35s | Used diff version of patch file. Binary files and potentially other changes not applied. Please rebase and squash commits if necessary. | ||| _ Patch Compile Tests _ | | +0 :ok: | mvndep | 0m 23s | Maven dependency ordering for patch | | +1 :green_heart: | mvninstall | 1m 24s | the patch passed | | -1 :x: | compile | 16m 18s | root in the patch failed. | | -1 :x: | javac | 16m 18s | root in the patch failed. | | -0 :warning: | checkstyle | 2m 43s | root: The patch generated 60 new + 100 unchanged - 19 fixed = 160 total (was 119) | | +1 :green_heart: | mvnsite | 2m 18s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 2s | The patch has no ill-formed XML file. 
| | +1 :green_heart: | shadedclient | 14m 6s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 1m 45s | the patch passed | | +1 :green_heart: | findbugs | 3m 31s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 25s | hadoop-common in the patch passed. | | -1 :x: | unit | 1m 37s | hadoop-aws in the patch passed. | | +1 :green_heart: | asflicense | 0m 54s | The patch does not generate ASF License warnings. | | | | 122m 46s | | | Reason | Tests | |---:|:--| | Failed junit tests | hadoop.fs.s3a.impl.TestNetworkBinding | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1820/20/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1820 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 7e6af1c459b0 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 459eb2a | | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1820/20/artifact/out/branch-compile-root.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1820/20/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1820/20/artifact/out/patch-compile-root.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1820/20/artifact/out/diff-checkstyle-root.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1820/20/artifact/out/patch-unit-hadoop-tools_hadoop-aws.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1820/20/testReport/ | | Max. process+thread count | 2548 (vs. 
ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws U: . | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1820/20/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] hadoop-yetus commented on pull request #1975: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not
hadoop-yetus commented on pull request #1975: URL: https://github.com/apache/hadoop/pull/1975#issuecomment-618616054 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 13s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 2 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 24m 3s | trunk passed | | +1 :green_heart: | compile | 0m 30s | trunk passed | | +1 :green_heart: | checkstyle | 0m 21s | trunk passed | | +1 :green_heart: | mvnsite | 0m 36s | trunk passed | | +1 :green_heart: | shadedclient | 16m 49s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 24s | trunk passed | | +0 :ok: | spotbugs | 0m 57s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 55s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 30s | the patch passed | | +1 :green_heart: | compile | 0m 25s | the patch passed | | +1 :green_heart: | javac | 0m 25s | the patch passed | | +1 :green_heart: | checkstyle | 0m 16s | the patch passed | | +1 :green_heart: | mvnsite | 0m 29s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 45s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed | | +1 :green_heart: | findbugs | 1m 0s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 18s | hadoop-azure in the patch passed. 
| | +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. | | | | 67m 0s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1975 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux c34fc51f8fb1 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 459eb2a | | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/2/testReport/ | | Max. process+thread count | 308 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[GitHub] [hadoop] goiri commented on a change in pull request #1974: HADOOP-17009: Embrace Immutability of Java Collections
goiri commented on a change in pull request #1974: URL: https://github.com/apache/hadoop/pull/1974#discussion_r414009203 ## File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/CompositeGroupsMapping.java ## @@ -78,15 +79,13 @@ user, provider.getClass().getSimpleName(), e.toString()); LOG.debug("Stacktrace: ", e); } - if (groups != null && ! groups.isEmpty()) { + if (!groups.isEmpty()) { groupSet.addAll(groups); if (!combined) break; } } -List results = new ArrayList(groupSet.size()); -results.addAll(groupSet); -return results; +return Collections.unmodifiableList(new ArrayList<>(groupSet)); Review comment: Now, somebody calling getGroups() would start failing if it adds new things to the list, while that was not the case before. I've tried using unmodifiable before, but it is kind of dangerous.
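To illustrate the concern: any existing caller that mutates the returned list would now throw at runtime instead of succeeding silently. A minimal, self-contained sketch (this `getGroups()` is a hypothetical stand-in for the method under review, not the actual `CompositeGroupsMapping` code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ImmutableGroupsDemo {
    // Hypothetical stand-in for CompositeGroupsMapping.getGroups():
    // returns an unmodifiable copy of the computed group set, as in the patch.
    public static List<String> getGroups() {
        List<String> groupSet = new ArrayList<>();
        groupSet.add("staff");
        return Collections.unmodifiableList(new ArrayList<>(groupSet));
    }

    public static void main(String[] args) {
        List<String> groups = getGroups();
        try {
            // A caller that previously appended to the result now fails:
            groups.add("admins");
        } catch (UnsupportedOperationException e) {
            System.out.println("caller mutation now throws UnsupportedOperationException");
        }
    }
}
```

This is the behavioural change being debated: before the patch the returned `ArrayList` was freely mutable, so making it unmodifiable can break callers that relied on that.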
[GitHub] [hadoop] goiri commented on issue #1964: HDFS-15281: Make sure ZKFC uses dfs.namenode.rpc-address to bind to host address
goiri commented on issue #1964: URL: https://github.com/apache/hadoop/pull/1964#issuecomment-618553348 I'm a little confused with the @hadoop-yetus report... it looks like it is checking old code.
[GitHub] [hadoop] hadoop-yetus commented on issue #1975: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not
hadoop-yetus commented on issue #1975: URL: https://github.com/apache/hadoop/pull/1975#issuecomment-618552122 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 9s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 21m 24s | trunk passed | | +1 :green_heart: | compile | 0m 27s | trunk passed | | +1 :green_heart: | checkstyle | 0m 19s | trunk passed | | +1 :green_heart: | mvnsite | 0m 29s | trunk passed | | +1 :green_heart: | shadedclient | 16m 16s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | trunk passed | | +0 :ok: | spotbugs | 0m 50s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 48s | trunk passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 23s | hadoop-azure in the patch failed. | | -1 :x: | compile | 0m 22s | hadoop-azure in the patch failed. | | -1 :x: | javac | 0m 22s | hadoop-azure in the patch failed. | | +1 :green_heart: | checkstyle | 0m 14s | the patch passed | | -1 :x: | mvnsite | 0m 23s | hadoop-azure in the patch failed. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 46s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 20s | the patch passed | | -1 :x: | findbugs | 0m 25s | hadoop-azure in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 0m 26s | hadoop-azure in the patch failed. 
| | +1 :green_heart: | asflicense | 0m 27s | The patch does not generate ASF License warnings. | | | | 61m 33s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/1/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1975 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 56700cb0b1ff 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 459eb2a | | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/1/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/1/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/1/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/1/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/1/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/1/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/1/testReport/ | | Max. process+thread count | 307 (vs. 
ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1975/1/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.
[jira] [Commented] (HADOOP-17009) Embrace Immutability of Java Collections
[ https://issues.apache.org/jira/browse/HADOOP-17009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090793#comment-17090793 ] David Mollitor commented on HADOOP-17009: - {code:none} [INFO] Executed tasks [INFO] [INFO] --- maven-dependency-plugin:3.0.2:copy-dependencies (package) @ hadoop-cos --- [INFO] Copying junit-4.12.jar to /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1974/src/hadoop-cloud-storage-project/hadoop-cos/target/lib/junit-4.12.jar [INFO] Copying hamcrest-core-1.3.jar to /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1974/src/hadoop-cloud-storage-project/hadoop-cos/target/lib/hamcrest-core-1.3.jar [INFO] Copying cos_api-bundle-5.6.19.jar to /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1974/src/hadoop-cloud-storage-project/hadoop-cos/target/lib/cos_api-bundle-5.6.19.jar [INFO] Copying classes to /home/jenkins/jenkins-slave/workspace/hadoop-multibranch_PR-1974/src/hadoop-cloud-storage-project/hadoop-cos/target/lib/hadoop-common-3.4.0-SNAPSHOT.jar [INFO] [INFO] Reactor Summary: [INFO] [INFO] Apache Hadoop Main . SUCCESS [ 0.324 s] [INFO] Apache Hadoop Build Tools .. SUCCESS [ 2.147 s] [INFO] Apache Hadoop Project POM .. SUCCESS [ 0.565 s] [INFO] Apache Hadoop Annotations .. SUCCESS [ 1.540 s] [INFO] Apache Hadoop Project Dist POM . SUCCESS [ 0.109 s] [INFO] Apache Hadoop Assemblies ... SUCCESS [ 0.169 s] [INFO] Apache Hadoop Maven Plugins SUCCESS [ 4.428 s] [INFO] Apache Hadoop MiniKDC .. SUCCESS [ 4.027 s] [INFO] Apache Hadoop Auth . SUCCESS [ 7.716 s] [INFO] Apache Hadoop Auth Examples SUCCESS [ 1.370 s] [INFO] Apache Hadoop Common ... SUCCESS [ 41.696 s] [INFO] Apache Hadoop NFS .. SUCCESS [ 6.527 s] [INFO] Apache Hadoop KMS .. SUCCESS [ 6.748 s] [INFO] Apache Hadoop Registry . SUCCESS [ 6.126 s] [INFO] Apache Hadoop Common Project ... SUCCESS [ 0.045 s] [INFO] Apache Hadoop HDFS Client .. SUCCESS [ 32.543 s] [INFO] Apache Hadoop HDFS . 
SUCCESS [ 51.249 s] [INFO] Apache Hadoop HDFS Native Client ... SUCCESS [01:45 min] [INFO] Apache Hadoop HttpFS ... SUCCESS [ 9.273 s] [INFO] Apache Hadoop HDFS-NFS . SUCCESS [ 5.737 s] [INFO] Apache Hadoop HDFS-RBF . SUCCESS [ 17.993 s] [INFO] Apache Hadoop HDFS Project . SUCCESS [ 0.055 s] [INFO] Apache Hadoop YARN . SUCCESS [ 0.042 s] [INFO] Apache Hadoop YARN API . SUCCESS [ 22.817 s] [INFO] Apache Hadoop YARN Common .. SUCCESS [ 26.990 s] [INFO] Apache Hadoop YARN Server .. SUCCESS [ 0.041 s] [INFO] Apache Hadoop YARN Server Common ... SUCCESS [ 17.056 s] [INFO] Apache Hadoop YARN NodeManager . SUCCESS [ 49.390 s] [INFO] Apache Hadoop YARN Web Proxy ... SUCCESS [ 5.383 s] [INFO] Apache Hadoop YARN ApplicationHistoryService ... SUCCESS [ 9.187 s] [INFO] Apache Hadoop YARN Timeline Service SUCCESS [ 7.032 s] [INFO] Apache Hadoop YARN ResourceManager . SUCCESS [ 30.968 s] [INFO] Apache Hadoop YARN Server Tests SUCCESS [ 5.771 s] [INFO] Apache Hadoop YARN Client .. SUCCESS [ 11.756 s] [INFO] Apache Hadoop YARN SharedCacheManager .. SUCCESS [ 5.344 s] [INFO] Apache Hadoop YARN Timeline Plugin Storage . SUCCESS [ 5.899 s] [INFO] Apache Hadoop YARN TimelineService HBase Backend ... SUCCESS [ 0.038 s] [INFO] Apache Hadoop YARN TimelineService HBase Common SUCCESS [ 6.171 s] [INFO] Apache Hadoop YARN TimelineService HBase Client SUCCESS [ 6.749 s] [INFO] Apache Hadoop YARN TimelineService HBase Servers ... SUCCESS [ 0.033 s] [INFO] Apache Hadoop YARN TimelineService HBase Server 1.2 SUCCESS [ 3.271 s] [INFO] Apache Hadoop YARN TimelineService HBase tests . SUCCESS [ 7.757 s] [INFO] Apache Hadoop YARN Router .. SUCCESS [ 7.290 s] [INFO] Apache Hadoop YARN TimelineService DocumentStore ... SUCCESS [ 4.418 s] [INFO] Apache Hadoop YARN Applications SUCCESS [ 0.038 s] [INFO] Apache Hadoop YARN DistributedShell SUCCESS [ 5.828 s] [INFO] Apache Hadoop YARN Unmanaged Am Launcher ... SUCCESS [ 3.660 s] [INFO] Apache Hadoop
[GitHub] [hadoop] szilard-nemeth edited a comment on issue #1939: YARN-10223. Duplicate jersey-test-framework-core dependency in yarn-server-common
szilard-nemeth edited a comment on issue #1939: URL: https://github.com/apache/hadoop/pull/1939#issuecomment-618506427 Hi @aajisaka , Change makes sense, LGTM. Do you need other reviews or may I commit this? Btw, side question: What is the policy of PR commits? Is it okay to push the "Squash and merge" button and just close the jira afterwards?
[GitHub] [hadoop] szilard-nemeth commented on issue #1939: YARN-10223. Duplicate jersey-test-framework-core dependency in yarn-server-common
szilard-nemeth commented on issue #1939: URL: https://github.com/apache/hadoop/pull/1939#issuecomment-618506427 Hi @aajisaka , Change makes sense, LGTM.
[GitHub] [hadoop] bilaharith opened a new pull request #1975: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not
bilaharith opened a new pull request #1975: URL: https://github.com/apache/hadoop/pull/1975 Each time an FS instance is created, a getAcl call is made. If the call fails with 400 Bad Request, the account is determined to be a non-HNS account. The recommendation is to add a config so the store call used to determine the account's HNS status can be avoided: if the config is set, use it to determine the account's HNS status; if it is not present in core-site, the default behaviour remains calling getAcl. **Driver test results using accounts in Central India** mvn -T 1C -Dparallel-tests=abfs -Dscale -DtestsThreadCount=8 clean verify **Account with HNS Support** [INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 421, Failures: 0, Errors: 0, Skipped: 66 [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24 **Account without HNS support** [INFO] Tests run: 56, Failures: 0, Errors: 0, Skipped: 0 [WARNING] Tests run: 421, Failures: 0, Errors: 0, Skipped: 240 [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 24
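A sketch of what such a core-site.xml entry might look like. The property name `fs.azure.account.hns.enabled` is the one proposed in HADOOP-17002; treat the exact key as an assumption until the PR is merged:

```xml
<!-- Illustrative only: declares the account's HNS status up front so the
     ABFS driver can skip the getAcl probe on FS creation. Omit the property
     to keep the default probe-on-startup behaviour. -->
<property>
  <name>fs.azure.account.hns.enabled</name>
  <value>true</value>
</property>
```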
[GitHub] [hadoop] hadoop-yetus commented on issue #1974: HADOOP-17009: Embrace Immutability of Java Collections
hadoop-yetus commented on issue #1974: URL: https://github.com/apache/hadoop/pull/1974#issuecomment-618496312 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 8s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 22m 14s | trunk passed | | -1 :x: | compile | 18m 34s | root in trunk failed. | | +1 :green_heart: | checkstyle | 0m 49s | trunk passed | | +1 :green_heart: | mvnsite | 1m 23s | trunk passed | | +1 :green_heart: | shadedclient | 18m 0s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 54s | trunk passed | | +0 :ok: | spotbugs | 2m 9s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 2m 7s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 51s | the patch passed | | -1 :x: | compile | 17m 11s | root in the patch failed. | | -1 :x: | javac | 17m 11s | root in the patch failed. | | -0 :warning: | checkstyle | 0m 48s | hadoop-common-project/hadoop-common: The patch generated 1 new + 326 unchanged - 3 fixed = 327 total (was 329) | | +1 :green_heart: | mvnsite | 1m 25s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 15m 27s | patch has no errors when building and testing our client artifacts. 
| | +1 :green_heart: | javadoc | 0m 53s | the patch passed | | +1 :green_heart: | findbugs | 2m 16s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 9m 8s | hadoop-common in the patch passed. | | +1 :green_heart: | asflicense | 0m 45s | The patch does not generate ASF License warnings. | | | | 114m 57s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/2/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1974 | | JIRA Issue | HADOOP-17009 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 98e69cbf294c 4.15.0-91-generic #92-Ubuntu SMP Fri Feb 28 11:09:48 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 459eb2a | | Default Java | Private Build-1.8.0_252-8u252-b09-1~16.04-b09 | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/2/artifact/out/branch-compile-root.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/2/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/2/artifact/out/patch-compile-root.txt | | checkstyle | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/2/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/2/testReport/ | | Max. process+thread count | 2763 (vs. ulimit of 5500) | | modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1974/2/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. 
This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17009) Embrace Immutability of Java Collections
[ https://issues.apache.org/jira/browse/HADOOP-17009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090734#comment-17090734 ] Hadoop QA commented on HADOOP-17009: -1 overall (the same Yetus report for PR #1974 as the GitHub comment above).
[GitHub] [hadoop] mukund-thakur commented on a change in pull request #1820: HADOOP-16830. Add public IOStatistics API + S3A implementation
mukund-thakur commented on a change in pull request #1820: URL: https://github.com/apache/hadoop/pull/1820#discussion_r413912067 ## File path: hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInstrumentation.java ## @@ -622,7 +642,8 @@ public void close() { throttleRateQuantile.stop(); s3GuardThrottleRateQuantile.stop(); metricsSystem.unregisterSource(metricsSourceName); - int activeSources = --metricsSourceActiveCounter; + metricsSourceActiveCounter--; Review comment: This was correct as well, though a bit confusing. I think that is why you changed it.
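The review point above hinges on Java's pre-decrement semantics: `--counter` decrements the field and then yields the new value, so the original one-liner and the patch's two-step form compute the same thing. A minimal sketch (the class and method names are illustrative stand-ins, not the S3AInstrumentation code):

```java
public class DecrementDemo {
    // Illustrative stand-in for S3AInstrumentation's active-source counter.
    public static int metricsSourceActiveCounter = 3;

    // Original form: pre-decrement updates the field and returns the new value.
    public static int closeOneLiner() {
        return --metricsSourceActiveCounter;
    }

    // Patched form: same result, split into two statements for readability.
    public static int closeTwoStep() {
        metricsSourceActiveCounter--;
        return metricsSourceActiveCounter;
    }
}
```

Both forms leave the counter and the returned value in agreement; the split form just avoids the double duty of `--` inside an assignment, which is the readability concern the comment raises.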
[jira] [Commented] (HADOOP-16921) NPE in s3a byte buffer block upload
[ https://issues.apache.org/jira/browse/HADOOP-16921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090707#comment-17090707 ] Steve Loughran commented on HADOOP-16921: - Happy for you to take it up. * Anything related to mark/position in the AWS SDK is usually caused by upload failures and retries; these are usually very intermittent and so hard to track down. * I'd start with adding some verifyOpen() checks on this and related methods, plus a check for the argument being null. Maybe a null pointer came back from the bytebuffer factory in some failure mode. > NPE in s3a byte buffer block upload > --- > > Key: HADOOP-16921 > URL: https://issues.apache.org/jira/browse/HADOOP-16921 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Affects Versions: 3.3.0 >Reporter: Steve Loughran >Priority: Minor > > NPE in s3a upload when fs.s3a.fast.upload.buffer = bytebuffer
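The two hardening suggestions above can be sketched as a guard method plus a constructor-time null check. This is a hedged illustration only: `verifyOpen()` is the name used in the comment, but the class, fields, and method bodies here are assumptions, not the actual S3A block-upload code.

```java
import java.io.IOException;
import java.nio.ByteBuffer;

// Illustrative stand-in for an s3a byte-buffer block, hardened as suggested.
public class ByteBufferBlock {
    private ByteBuffer buffer;
    private boolean closed;

    public ByteBufferBlock(ByteBuffer buffer) {
        // Fail fast if the buffer factory handed back null in a failure mode.
        if (buffer == null) {
            throw new NullPointerException("buffer factory returned null");
        }
        this.buffer = buffer;
    }

    // Guard to place at the top of every operation that touches the buffer.
    private void verifyOpen() throws IOException {
        if (closed || buffer == null) {
            throw new IOException("block is closed");
        }
    }

    public int remaining() throws IOException {
        verifyOpen();
        return buffer.remaining();
    }

    public void close() {
        closed = true;
        buffer = null;  // release the reference; later use now fails loudly
    }
}
```

The point of the pattern is to convert a late, hard-to-diagnose NPE deep inside a retry into an immediate, well-labelled failure at the call site.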
[jira] [Commented] (HADOOP-16921) NPE in s3a byte buffer block upload
[ https://issues.apache.org/jira/browse/HADOOP-16921?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090702#comment-17090702 ] Mukund Thakur commented on HADOOP-16921: Can I pick this up, or have you already started working on it? [~ste...@apache.org]
[jira] [Commented] (HADOOP-16999) ABFS: Reuse DSAS fetched in ABFS Input and Output stream
[ https://issues.apache.org/jira/browse/HADOOP-16999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090662#comment-17090662 ] Sandeep More commented on HADOOP-16999: --- [~snvijaya] 1 min seems to be a bit too small for renewal, don't you think? > ABFS: Reuse DSAS fetched in ABFS Input and Output stream > > > Key: HADOOP-16999 > URL: https://issues.apache.org/jira/browse/HADOOP-16999 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > > This Jira will track the update where ABFS input and output streams can > re-use the D-SAS token fetched. If the SAS is within 1 minute of expiry, ABFS > will request a new SAS. When the stream is closed the SAS will be released.
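The question above is about the size of the renewal window in a design like the following hedged sketch (class, method names, and structure are assumptions for illustration, not the ABFS implementation): with a 1-minute window, a stream only fetches a fresh SAS once the cached one is under a minute from expiry, so a slow in-flight request can still outlive the token; a wider window trades extra token fetches for more safety margin.

```java
import java.time.Duration;
import java.time.Instant;

// Illustrative cache for a SAS token with a configurable renewal window.
public class SasTokenCache {
    private final Duration renewalWindow;  // e.g. 1 minute, per the JIRA text
    private String token;
    private Instant expiry;

    public SasTokenCache(Duration renewalWindow) {
        this.renewalWindow = renewalWindow;
    }

    // True when nothing is cached or expiry falls inside the renewal window.
    public boolean needsRefresh(Instant now) {
        return token == null || !now.plus(renewalWindow).isBefore(expiry);
    }

    public void update(String token, Instant expiry) {
        this.token = token;
        this.expiry = expiry;
    }
}
```

Making the window a constructor argument, as above, is one way to let operators tune the trade-off rather than hard-coding 1 minute.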
[GitHub] [hadoop] steveloughran commented on issue #1923: Hadoop 16857. ABFS: Stop CustomTokenProvider retry logic to depend on AbfsRestOp retry policy
steveloughran commented on issue #1923: URL: https://github.com/apache/hadoop/pull/1923#issuecomment-618409067 Hey, can people remember to update the JIRA when something is merged? Easily forgotten, but it's critical. Cherry-picked to branch-3.3 *and* closed the JIRA.
[GitHub] [hadoop] steveloughran commented on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS
steveloughran commented on issue #1899: URL: https://github.com/apache/hadoop/pull/1899#issuecomment-618407786 +1, merged to trunk. Please don't do rebase/amend/force push again. Thanks.
[jira] [Resolved] (HADOOP-16914) Adding Output Stream Counters in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran resolved HADOOP-16914. - Fix Version/s: 3.3.0 Resolution: Fixed > Adding Output Stream Counters in ABFS > - > > Key: HADOOP-16914 > URL: https://issues.apache.org/jira/browse/HADOOP-16914 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > Fix For: 3.3.0 > > > AbfsOutputStream does not have any counters that can be populated or referred > to when needed for finding bottlenecks in that area. > purpose: > * Create an interface and Implementation class for all the AbfsOutputStream > counters. > * populate the counters in AbfsOutputStream in appropriate places. > * Override the toString() to see counters in logs. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16857) ABFS: Optimize HttpRequest retry triggers
[ https://issues.apache.org/jira/browse/HADOOP-16857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran updated HADOOP-16857: Fix Version/s: 3.3.1 Resolution: Fixed Status: Resolved (was: Patch Available) > ABFS: Optimize HttpRequest retry triggers > - > > Key: HADOOP-16857 > URL: https://issues.apache.org/jira/browse/HADOOP-16857 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/azure >Affects Versions: 3.3.1 >Reporter: Sneha Vijayarajan >Assignee: Sneha Vijayarajan >Priority: Major > Fix For: 3.3.1 > > > Currently retry logic gets triggered when access token fetch fails even with > irrecoverable errors. Causing a large wait time for the request failure to be > reported. > > Retry logic needs to be optimized to identify such access token fetch > failures and fail fast. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Assigned] (HADOOP-16921) NPE in s3a byte buffer block upload
[ https://issues.apache.org/jira/browse/HADOOP-16921?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Steve Loughran reassigned HADOOP-16921: --- Assignee: (was: Steve Loughran)
[jira] [Updated] (HADOOP-16193) add extra S3A MPU test to see what happens if a file is created during the MPU
[ https://issues.apache.org/jira/browse/HADOOP-16193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-16193: Fix Version/s: (was: 3.1.4) 3.1.5 > add extra S3A MPU test to see what happens if a file is created during the MPU > -- > > Key: HADOOP-16193 > URL: https://issues.apache.org/jira/browse/HADOOP-16193 > Project: Hadoop Common > Issue Type: Sub-task > Components: fs/s3 >Reporter: Steve Loughran >Assignee: Steve Loughran >Priority: Minor > Fix For: 3.1.5 > > > Proposed extra test for the S3A MPU: if you create and then delete a file > while an MPU is in progress, when you finally complete the MPU the new data > is present. > This verifies that the other FS operations don't somehow cancel the > in-progress upload, and that eventual consistency brings the latest value out. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-16917) Update dependency in branch-3.1
[ https://issues.apache.org/jira/browse/HADOOP-16917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Gabor Bota updated HADOOP-16917: Target Version/s: 3.1.5 (was: 3.1.4) > Update dependency in branch-3.1 > --- > > Key: HADOOP-16917 > URL: https://issues.apache.org/jira/browse/HADOOP-16917 > Project: Hadoop Common > Issue Type: Improvement > Components: build, fs/s3 >Affects Versions: 3.1.4 >Reporter: Wei-Chiu Chuang >Priority: Blocker > Labels: release-blocker > Attachments: dependency-check-report.html > > > Jackson-databind 2.9.10.3 --> 2.10.3 > Zookeeper 3.4.13 --> 3.4.14 > hbase-client 1.2.6 --> 1.2.6.1 > aws-java-sdk-bundle 1.11.271 --> 1.11.563? (this is the version used by trunk) -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-16914) Adding Output Stream Counters in ABFS
[ https://issues.apache.org/jira/browse/HADOOP-16914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090593#comment-17090593 ] Hudson commented on HADOOP-16914: - SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18175 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18175/]) HADOOP-16914 Adding Output Stream Counters in ABFS (#1899) (github: rev 459eb2ad6d5bc6b21462e728fb334c6e30e14c39) * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamContext.java * (add) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamStatisticsImpl.java * (add) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/TestAbfsOutputStreamStatistics.java * (edit) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStream.java * (add) hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/services/AbfsOutputStreamStatistics.java * (edit) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/AbstractAbfsIntegrationTest.java * (add) hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAbfsOutputStreamStatistics.java > Adding Output Stream Counters in ABFS > - > > Key: HADOOP-16914 > URL: https://issues.apache.org/jira/browse/HADOOP-16914 > Project: Hadoop Common > Issue Type: Improvement > Components: fs/azure >Affects Versions: 3.2.1 >Reporter: Mehakmeet Singh >Assignee: Mehakmeet Singh >Priority: Major > > AbfsOutputStream does not have any counters that can be populated or referred > to when needed for finding bottlenecks in that area. > purpose: > * Create an interface and Implementation class for all the AbfsOutputStream > counters. > * populate the counters in AbfsOutputStream in appropriate places. 
> * Override the toString() to see counters in logs.
[GitHub] [hadoop] hadoop-yetus commented on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS
hadoop-yetus commented on issue #1899: URL: https://github.com/apache/hadoop/pull/1899#issuecomment-618383301 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 34s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 18s | trunk passed | | +1 :green_heart: | compile | 0m 31s | trunk passed | | +1 :green_heart: | checkstyle | 0m 24s | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | trunk passed | | +1 :green_heart: | shadedclient | 14m 51s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 26s | trunk passed | | +0 :ok: | spotbugs | 0m 51s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 49s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed | | +1 :green_heart: | javac | 0m 23s | the patch passed | | +1 :green_heart: | checkstyle | 0m 17s | the patch passed | | +1 :green_heart: | mvnsite | 0m 27s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 2s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed | | +1 :green_heart: | findbugs | 0m 53s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 23s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. 
| | | | 57m 39s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/15/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1899 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 10e816d6c9d8 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5958af4 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~16.04-b09 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/15/testReport/ | | Max. process+thread count | 422 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/15/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[GitHub] [hadoop] steveloughran edited a comment on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS
steveloughran edited a comment on issue #1899: URL: https://github.com/apache/hadoop/pull/1899#issuecomment-618374352 > Sorry for the force push, I had done the -amend and rebased it afterwards. So, wasn't able to go back to the previous HEAD to have the commit and unstage changes. Why the -amend? Why not just add another change? Please don't rebase once reviewing has started, as it becomes impossible to tie discussions back to the state of the patch at the time, or to see what changes happened after. For example, a commit called "fix review changes": which review? I can't see it from the history any more. Until other people start reviewing, go for it. Once it's begun, if you do need to reset everything, it is better to start again with a whole new PR with the history squashed into a single commit:
```
git diff trunk...HEAD > history.diff
git co trunk
git co -b new-branch
git apply -3 --verbose --whitespace=fix history.diff
..etc. etc
```
Key point: rebasing makes reviewing significantly harder, and the harder a patch is to review, the fewer reviews it gets.
[GitHub] [hadoop] mehakmeet commented on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS
mehakmeet commented on issue #1899: URL: https://github.com/apache/hadoop/pull/1899#issuecomment-618357119 Sorry for the force push; I had done the -amend and rebased it afterwards, so I wasn't able to go back to the previous HEAD to keep the commit and unstage the changes.
[jira] [Commented] (HADOOP-16886) Add hadoop.http.idle_timeout.ms to core-default.xml
[ https://issues.apache.org/jira/browse/HADOOP-16886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090497#comment-17090497 ] Ayush Saxena commented on HADOOP-16886: --- Thanks [~leosun08] for the confirmation. v002 LGTM, +1. Will commit by EOD if no further comments. > Add hadoop.http.idle_timeout.ms to core-default.xml > --- > > Key: HADOOP-16886 > URL: https://issues.apache.org/jira/browse/HADOOP-16886 > Project: Hadoop Common > Issue Type: Improvement >Affects Versions: 3.2.0, 3.0.4, 3.1.2 >Reporter: Wei-Chiu Chuang >Assignee: Lisheng Sun >Priority: Major > Attachments: HADOOP-16886-001.patch, HADOOP-16886.002.patch > > > HADOOP-15696 made the http server connection idle time configurable > (hadoop.http.idle_timeout.ms). > This configuration key is added to kms-default.xml and httpfs-default.xml but > we missed it in core-default.xml. We should add it there because NNs/JNs/DNs > also use it.
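A core-default.xml entry for this key would follow the usual Hadoop property pattern shown below. This is only an illustration of the shape of the change: the default value and description text are assumptions to be checked against the value HttpServer2 actually falls back to, not taken from the patch itself.

```
<property>
  <name>hadoop.http.idle_timeout.ms</name>
  <!-- Assumed default for illustration; must match HttpServer2's built-in
       fallback so that documenting the key does not change behavior. -->
  <value>60000</value>
  <description>
    Idle connection timeout in milliseconds for the embedded Hadoop HTTP
    server, used by the NameNode, JournalNode, and DataNode web endpoints.
    Connections idle longer than this are closed by the server.
  </description>
</property>
```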
[GitHub] [hadoop] steveloughran commented on a change in pull request #1899: HADOOP-16914 Adding Output Stream Counters in ABFS
steveloughran commented on a change in pull request #1899: URL: https://github.com/apache/hadoop/pull/1899#discussion_r413708902 ## File path: hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AzureBlobFileSystemStore.java ## @@ -51,6 +51,7 @@ import com.google.common.annotations.VisibleForTesting; import com.google.common.base.Preconditions; import com.google.common.base.Strings; +import org.apache.hadoop.fs.azurebfs.services.AbfsOutputStreamStatisticsImpl; Review comment: needs to go in with the org.apache block
[GitHub] [hadoop] hadoop-yetus commented on issue #1967: YARN-9898. Workaround of Netty-all dependency aarch64 support
hadoop-yetus commented on issue #1967: URL: https://github.com/apache/hadoop/pull/1967#issuecomment-618326748 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 1m 12s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | -1 :x: | test4tests | 0m 0s | The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. | ||| _ trunk Compile Tests _ | | +0 :ok: | mvndep | 0m 54s | Maven dependency ordering for branch | | +1 :green_heart: | mvninstall | 23m 12s | trunk passed | | -1 :x: | compile | 19m 38s | root in trunk failed. | | -1 :x: | mvnsite | 5m 41s | root in trunk failed. | | +1 :green_heart: | shadedclient | 65m 31s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 7m 1s | trunk passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 11s | root in trunk failed. | | +0 :ok: | mvndep | 0m 31s | Maven dependency ordering for patch | | -1 :x: | mvninstall | 0m 11s | root in the patch failed. | | -1 :x: | mvninstall | 0m 9s | hadoop-hdfs in the patch failed. | | -1 :x: | mvninstall | 0m 9s | hadoop-hdfs-client in the patch failed. | | -1 :x: | mvninstall | 0m 10s | hadoop-project in the patch failed. | | -1 :x: | mvninstall | 0m 10s | hadoop-yarn-csi in the patch failed. | | -1 :x: | compile | 0m 11s | root in the patch failed. | | -1 :x: | javac | 0m 11s | root in the patch failed. | | -1 :x: | mvnsite | 0m 10s | root in the patch failed. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | xml | 0m 7s | The patch has no ill-formed XML file. 
| | -1 :x: | shadedclient | 0m 16s | patch has errors when building and testing our client artifacts. | | -1 :x: | javadoc | 0m 11s | root in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 0m 11s | root in the patch failed. | | +0 :ok: | asflicense | 0m 12s | ASF License check generated no output? | | | | 78m 12s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1967 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient xml | | uname | Linux a8eda0d23848 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5958af4 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~16.04-b09 | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/branch-compile-root.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/branch-mvnsite-root.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/branch-mvninstall-root.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-mvninstall-root.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-client.txt | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-mvninstall-hadoop-project.txt | | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-mvninstall-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-csi.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-compile-root.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-compile-root.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-mvnsite-root.txt | | javadoc | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-javadoc-root.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1967/4/artifact/out/patch-unit-root.txt | | Test Results |
[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1899: HADOOP-16914 Adding Output Stream Counters in ABFS
hadoop-yetus removed a comment on issue #1899: URL: https://github.com/apache/hadoop/pull/1899#issuecomment-614591927 :confetti_ball: **+1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 0m 32s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 0s | No case conflicting files found. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 4 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 18m 56s | trunk passed | | +1 :green_heart: | compile | 0m 31s | trunk passed | | +1 :green_heart: | checkstyle | 0m 24s | trunk passed | | +1 :green_heart: | mvnsite | 0m 35s | trunk passed | | +1 :green_heart: | shadedclient | 14m 52s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 27s | trunk passed | | +0 :ok: | spotbugs | 0m 52s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 50s | trunk passed | ||| _ Patch Compile Tests _ | | +1 :green_heart: | mvninstall | 0m 28s | the patch passed | | +1 :green_heart: | compile | 0m 23s | the patch passed | | +1 :green_heart: | javac | 0m 23s | the patch passed | | +1 :green_heart: | checkstyle | 0m 16s | the patch passed | | +1 :green_heart: | mvnsite | 0m 27s | the patch passed | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 14m 5s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed | | +1 :green_heart: | findbugs | 0m 54s | the patch passed | ||| _ Other Tests _ | | +1 :green_heart: | unit | 1m 22s | hadoop-azure in the patch passed. | | +1 :green_heart: | asflicense | 0m 31s | The patch does not generate ASF License warnings. 
| | | | 57m 19s | | | Subsystem | Report/Notes | |--:|:-| | Docker | Client=19.03.8 Server=19.03.8 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1899 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6ca25026889e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / cc5c1da | | Default Java | 1.8.0_242 | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/11/testReport/ | | Max. process+thread count | 400 (vs. ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1899/11/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.11.1 https://yetus.apache.org | This message was automatically generated. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Commented] (HADOOP-17010) Add queue capacity weights support in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090445#comment-17090445 ] Hadoop QA commented on HADOOP-17010: | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 13s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 49s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 18m 8s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 47s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 18m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 55s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 2m 10s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 8s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 17m 18s{color} | {color:red} root in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 17m 18s{color} | {color:red} root in the patch failed. {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 45s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 6 new + 109 unchanged - 0 fixed = 115 total (was 109) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 15s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 41s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 48s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}113m 54s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/PreCommit-HADOOP-Build/16908/artifact/out/Dockerfile | | JIRA Issue | HADOOP-17010 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13000917/HADOOP-17010.001.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux a9331b2c69a1 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5958af4 | | Default Java | Private Build-1.8.0_242-8u242-b08-0ubuntu3~16.04-b08 | | compile | https://builds.apache.org/job/PreCommit-HADOOP-Build/16908/artifact/out/branch-compile-root.txt | | compile |
[jira] [Commented] (HADOOP-17007) hadoop-cos fails to build
[ https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090374#comment-17090374 ] Mingliang Liu commented on HADOOP-17007: Yes I can also reproduce with similar stack on MacOS with Java 8 on {{trunk}}. > hadoop-cos fails to build > - > > Key: HADOOP-17007 > URL: https://issues.apache.org/jira/browse/HADOOP-17007 > Project: Hadoop Common > Issue Type: Bug > Components: fs/cos >Reporter: Wei-Chiu Chuang >Priority: Major > Labels: release-blocker > > Found the following compilation error in a PR precommit. The failure doesn't > seem related to the PR itself. Can't reproduce locally though. > https://builds.apache.org/job/hadoop-multibranch/job/PR-1972/1/artifact/out/patch-compile-root.txt > {noformat} > [INFO] Apache Hadoop Tencent COS Support .. FAILURE [ 0.074 > s] > [INFO] Apache Hadoop Cloud Storage SKIPPED > [INFO] Apache Hadoop Cloud Storage Project SKIPPED > [INFO] > > [INFO] BUILD FAILURE > [INFO] > > [INFO] Total time: 17:31 min > [INFO] Finished at: 2020-04-22T07:37:51+00:00 > [INFO] Final Memory: 192M/1714M > [INFO] > > [ERROR] Failed to execute goal > org.apache.maven.plugins:maven-dependency-plugin:3.0.2:copy-dependencies > (package) on project hadoop-cos: Artifact has not been packaged yet. When > used on reactor artifact, copy should be executed after packaging: see > MDEP-187. -> [Help 1] > {noformat} -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: common-issues-h...@hadoop.apache.org
[jira] [Updated] (HADOOP-17010) Add queue capacity weights support in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li updated HADOOP-17010: Attachment: HADOOP-17010.001.patch > Add queue capacity weights support in FairCallQueue > --- > > Key: HADOOP-17010 > URL: https://issues.apache.org/jira/browse/HADOOP-17010 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Attachments: HADOOP-17010.001.patch > > > Right now in FairCallQueue all subqueues share the same capacity by evenly > distributing total capacity. This requested feature is to make subqueues able > to have different queue capacity where more important queues can have more > capacity, thus fewer queue overflows and client backoffs.
[jira] [Updated] (HADOOP-17010) Add queue capacity weights support in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li updated HADOOP-17010: Attachment: (was: HADOOP-17010.001.patch) > Add queue capacity weights support in FairCallQueue > --- > > Key: HADOOP-17010 > URL: https://issues.apache.org/jira/browse/HADOOP-17010 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > > Right now in FairCallQueue all subqueues share the same capacity by evenly > distributing total capacity. This requested feature is to make subqueues able > to have different queue capacity where more important queues can have more > capacity, thus fewer queue overflows and client backoffs.
[jira] [Updated] (HADOOP-17010) Add queue capacity weights support in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li updated HADOOP-17010: Status: Patch Available (was: In Progress) > Add queue capacity weights support in FairCallQueue > --- > > Key: HADOOP-17010 > URL: https://issues.apache.org/jira/browse/HADOOP-17010 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Attachments: HADOOP-17010.001.patch > > > Right now in FairCallQueue all subqueues share the same capacity by evenly > distributing total capacity. This requested feature is to make subqueues able > to have different queue capacity where more important queues can have more > capacity, thus fewer queue overflows and client backoffs.
[jira] [Work started] (HADOOP-17010) Add queue capacity weights support in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HADOOP-17010 started by Fengnan Li. --- > Add queue capacity weights support in FairCallQueue > --- > > Key: HADOOP-17010 > URL: https://issues.apache.org/jira/browse/HADOOP-17010 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Attachments: HADOOP-17010.001.patch > > > Right now in FairCallQueue all subqueues share the same capacity by evenly > distributing total capacity. This requested feature is to make subqueues able > to have different queue capacity where more important queues can have more > capacity, thus fewer queue overflows and client backoffs.
[jira] [Updated] (HADOOP-17010) Add queue capacity weights support in FairCallQueue
[ https://issues.apache.org/jira/browse/HADOOP-17010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li updated HADOOP-17010: Attachment: HADOOP-17010.001.patch > Add queue capacity weights support in FairCallQueue > --- > > Key: HADOOP-17010 > URL: https://issues.apache.org/jira/browse/HADOOP-17010 > Project: Hadoop Common > Issue Type: New Feature >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Major > Attachments: HADOOP-17010.001.patch > > > Right now in FairCallQueue all subqueues share the same capacity by evenly > distributing total capacity. This requested feature is to make subqueues able > to have different queue capacity where more important queues can have more > capacity, thus fewer queue overflows and client backoffs.
[jira] [Created] (HADOOP-17010) Add queue capacity weights support in FairCallQueue
Fengnan Li created HADOOP-17010: --- Summary: Add queue capacity weights support in FairCallQueue Key: HADOOP-17010 URL: https://issues.apache.org/jira/browse/HADOOP-17010 Project: Hadoop Common Issue Type: New Feature Reporter: Fengnan Li Assignee: Fengnan Li Right now in FairCallQueue all subqueues share the same capacity by evenly distributing total capacity. This requested feature is to make subqueues able to have different queue capacity where more important queues can have more capacity, thus fewer queue overflows and client backoffs.
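[Editor's note] The weight-based capacity split requested above can be sketched in a few lines. This is a hypothetical illustration, not Hadoop's actual FairCallQueue code: the function name, the signature, and the choice to hand rounding leftovers to the front (highest-priority) sub-queues are all invented for the example.

```python
def split_capacity(total, weights):
    """Split a total call-queue capacity across sub-queues by integer weight.

    Each sub-queue gets floor(total * w / sum(weights)) slots; any rounding
    remainder is handed out one slot at a time starting from the front
    (highest-priority) sub-queues. Illustrative only, not Hadoop code.
    """
    weight_sum = sum(weights)
    caps = [total * w // weight_sum for w in weights]
    for i in range(total - sum(caps)):  # distribute leftover slots
        caps[i % len(caps)] += 1
    return caps

# 4 sub-queues sharing 100 slots with weights 4:2:1:1 instead of evenly:
print(split_capacity(100, [4, 2, 1, 1]))  # -> [51, 25, 12, 12]
```

With equal weights this degenerates to the current even split, while weights such as 4:2:1:1 let the more important sub-queues hold more calls before overflowing and triggering client backoff.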
[GitHub] [hadoop] hadoop-yetus commented on issue #1969: HADOOP-17002. ABFS: Adding config to determine if the account is HNS enabled or not
hadoop-yetus commented on issue #1969: URL: https://github.com/apache/hadoop/pull/1969#issuecomment-618223032 :broken_heart: **-1 overall** | Vote | Subsystem | Runtime | Comment | |::|--:|:|:| | +0 :ok: | reexec | 24m 55s | Docker mode activated. | ||| _ Prechecks _ | | +1 :green_heart: | dupname | 0m 1s | No case conflicting files found. | | +0 :ok: | markdownlint | 0m 0s | markdownlint was not available. | | +1 :green_heart: | @author | 0m 0s | The patch does not contain any @author tags. | | +1 :green_heart: | test4tests | 0m 0s | The patch appears to include 3 new or modified test files. | ||| _ trunk Compile Tests _ | | +1 :green_heart: | mvninstall | 19m 9s | trunk passed | | +1 :green_heart: | compile | 0m 31s | trunk passed | | +1 :green_heart: | checkstyle | 0m 24s | trunk passed | | +1 :green_heart: | mvnsite | 0m 34s | trunk passed | | +1 :green_heart: | shadedclient | 14m 53s | branch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 26s | trunk passed | | +0 :ok: | spotbugs | 0m 52s | Used deprecated FindBugs config; considering switching to SpotBugs. | | +1 :green_heart: | findbugs | 0m 50s | trunk passed | ||| _ Patch Compile Tests _ | | -1 :x: | mvninstall | 0m 23s | hadoop-azure in the patch failed. | | -1 :x: | compile | 0m 24s | hadoop-azure in the patch failed. | | -1 :x: | javac | 0m 24s | hadoop-azure in the patch failed. | | +1 :green_heart: | checkstyle | 0m 16s | the patch passed | | -1 :x: | mvnsite | 0m 25s | hadoop-azure in the patch failed. | | +1 :green_heart: | whitespace | 0m 0s | The patch has no whitespace issues. | | +1 :green_heart: | shadedclient | 13m 42s | patch has no errors when building and testing our client artifacts. | | +1 :green_heart: | javadoc | 0m 23s | the patch passed | | -1 :x: | findbugs | 0m 28s | hadoop-azure in the patch failed. | ||| _ Other Tests _ | | -1 :x: | unit | 0m 28s | hadoop-azure in the patch failed. 
| | +1 :green_heart: | asflicense | 0m 32s | The patch does not generate ASF License warnings. | | | | 80m 17s | | | Subsystem | Report/Notes | |--:|:-| | Docker | ClientAPI=1.40 ServerAPI=1.40 base: https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/11/artifact/out/Dockerfile | | GITHUB PR | https://github.com/apache/hadoop/pull/1969 | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle markdownlint | | uname | Linux 4073c7e65a29 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | personality/hadoop.sh | | git revision | trunk / 5958af4 | | Default Java | Private Build-1.8.0_252-8u252-b09-1~16.04-b09 | | mvninstall | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/11/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt | | compile | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/11/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | javac | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/11/artifact/out/patch-compile-hadoop-tools_hadoop-azure.txt | | mvnsite | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/11/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt | | findbugs | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/11/artifact/out/patch-findbugs-hadoop-tools_hadoop-azure.txt | | unit | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/11/artifact/out/patch-unit-hadoop-tools_hadoop-azure.txt | | Test Results | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/11/testReport/ | | Max. process+thread count | 481 (vs. 
ulimit of 5500) | | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure | | Console output | https://builds.apache.org/job/hadoop-multibranch/job/PR-1969/11/console | | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 | | Powered by | Apache Yetus 0.12.0 https://yetus.apache.org | This message was automatically generated.