[jira] [Commented] (HDFS-13346) RBF: Fix synchronization of router quota and ns quota
[ https://issues.apache.org/jira/browse/HDFS-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435018#comment-16435018 ] Yiqun Lin commented on HDFS-13346:
--
Hi [~liuhongtong], I took a quick look at your patch. I'm afraid you missed my previous proposal, :P.
{quote}So I mean the following way should still be okay:
{noformat}
One better way is that we invoke the update operation this.rpcServer.setQuota in RouterAdminServer, not in RouterQuotaUpdateService.
{noformat}
[~liuhongtong], I think you can go ahead this way.
{quote}

> RBF: Fix synchronization of router quota and ns quota
> -----------------------------------------------------
>
> Key: HDFS-13346
> URL: https://issues.apache.org/jira/browse/HDFS-13346
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: liuhongtong
> Assignee: liuhongtong
> Priority: Major
> Attachments: HDFS-13346.001.patch, HDFS-13346.002.patch, HDFS-13346.003.patch
>
> Check Router Quota and ns Quota:
> {code}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> Source                Destinations  Owner  Group  Mode       Quota/Usage
> /ns10t                ns10->/ns10t  hadp   hadp   rwxr-xr-x  [NsQuota: 150/319, SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tt    hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>        150        -155        none        inf        3        302        0 hdfs://ns10/ns10t
> {code}
> Update Router Quota:
> {code:java}
> $ hdfs dfsrouteradmin -setQuota /ns10t -nsQuota 400
> Successfully set quota for mount point /ns10t
> {code}
> Check Router Quota and ns Quota:
> {code:java}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> Source                Destinations  Owner  Group  Mode       Quota/Usage
> /ns10t                ns10->/ns10t  hadp   hadp   rwxr-xr-x  [NsQuota: 400/319, SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tt    hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>        150        -155        none        inf        3        302        0 hdfs://ns10/ns10t
> {code}
> Now the Router Quota has been updated successfully, but the ns Quota has not.
> -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
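The proposal quoted in the comment above — applying the namespace quota synchronously from the admin-side setQuota path instead of waiting for the periodic RouterQuotaUpdateService — can be sketched with a toy model. Everything here (ToyNamespace, ToyRouterAdmin, the map fields) is a hypothetical stand-in, not Hadoop code:

```java
import java.util.HashMap;
import java.util.Map;

// Toy stand-in for a backing namespace that accepts quota updates.
class ToyNamespace {
  final Map<String, Long> nsQuota = new HashMap<>();

  void setQuota(String path, long quota) {
    nsQuota.put(path, quota);
  }
}

// Toy stand-in for the router admin server. In the synchronous variant
// sketched here, the mount-table quota and the namespace quota are updated
// in the same call, so they cannot diverge between runs of a background
// update service.
class ToyRouterAdmin {
  final Map<String, Long> mountTableQuota = new HashMap<>();
  final ToyNamespace ns;

  ToyRouterAdmin(ToyNamespace ns) {
    this.ns = ns;
  }

  void setQuota(String mountPoint, long quota) {
    mountTableQuota.put(mountPoint, quota); // update the mount table entry
    ns.setQuota(mountPoint, quota);         // ...and push to the namespace now
  }
}
```

The design point is only that the two writes happen in one code path; the real patch discussion is about where that call belongs (RouterAdminServer vs. RouterQuotaUpdateService).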
[jira] [Commented] (HDFS-13434) RBF: Fix dead links link RBF document
[ https://issues.apache.org/jira/browse/HDFS-13434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16435005#comment-16435005 ] Yiqun Lin commented on HDFS-13434:
--
Hi [~ajisakaa], would you mind adding a description for this? It will make the issue clearer. Thanks.

> RBF: Fix dead links link RBF document
> -------------------------------------
>
> Key: HDFS-13434
> URL: https://issues.apache.org/jira/browse/HDFS-13434
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: documentation
> Reporter: Akira Ajisaka
> Priority: Major
[jira] [Updated] (HDFS-13435) RBF: Fix wrong error loggings
[ https://issues.apache.org/jira/browse/HDFS-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13435:
--
Summary: RBF: Fix wrong error loggings (was: RBF: Fix wrong error logs)

> RBF: Fix wrong error loggings
> -----------------------------
>
> Key: HDFS-13435
> URL: https://issues.apache.org/jira/browse/HDFS-13435
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Affects Versions: 3.0.1
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Priority: Major
> Attachments: HDFS-13435.001.patch
>
> There are many places that use {{Logger.error(String format, Object... arguments)}} incorrectly.
> An example:
> {code:java}
> LOG.error("Cannot remove {}", path, e);
> {code}
> The exception passed here has no effect and won't be printed. It should be updated to
> {code:java}
> LOG.error("Cannot remove {}: {}.", path, e.getMessage());
> {code}
> or
> {code:java}
> LOG.error("Cannot remove " + path, e);
> {code}
[jira] [Commented] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fileds in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434997#comment-16434997 ] Shashikant Banerjee commented on HDFS-13413:
--
Thanks [~bharatviswa] for the review. Patch v1 addresses your review comments. Please have a look.

> ClusterId and DatanodeUuid should be marked mandatory fileds in
> SCMRegisteredCmdResponseProto
> -----------------------------------------------------------------
>
> Key: HDFS-13413
> URL: https://issues.apache.org/jira/browse/HDFS-13413
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Shashikant Banerjee
> Assignee: Shashikant Banerjee
> Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-13413-HDFS-7240.000.patch, HDFS-13413-HDFS-7240.001.patch
>
> ClusterId as well as DatanodeUuid are currently optional fields in {{SCMRegisteredCmdResponseProto}}. We have to make both clusterId and DatanodeUuid required fields and handle them properly. As of now, we don't do anything with the response of datanode registration. We should validate the clusterId and also the datanodeUuid.
[jira] [Updated] (HDFS-13435) RBF: Fix wrong error logs
[ https://issues.apache.org/jira/browse/HDFS-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13435:
--
Description:
There are many places that use {{Logger.error(String format, Object... arguments)}} incorrectly.
An example:
{code:java}
LOG.error("Cannot remove {}", path, e);
{code}
The exception passed here has no effect and won't be printed. It should be updated to
{code:java}
LOG.error("Cannot remove {}: {}.", path, e.getMessage());
{code}
or
{code:java}
LOG.error("Cannot remove " + path, e);
{code}

> RBF: Fix wrong error logs
> -------------------------
>
> Key: HDFS-13435
> URL: https://issues.apache.org/jira/browse/HDFS-13435
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Affects Versions: 3.0.1
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Priority: Major
> Attachments: HDFS-13435.001.patch
>
> There are many places that use {{Logger.error(String format, Object... arguments)}} incorrectly.
> An example:
> {code:java}
> LOG.error("Cannot remove {}", path, e);
> {code}
> The exception passed here has no effect and won't be printed. It should be updated to
> {code:java}
> LOG.error("Cannot remove {}: {}.", path, e.getMessage());
> {code}
> or
> {code:java}
> LOG.error("Cannot remove " + path, e);
> {code}
[jira] [Updated] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fileds in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13413:
--
Attachment: HDFS-13413-HDFS-7240.001.patch

> ClusterId and DatanodeUuid should be marked mandatory fileds in
> SCMRegisteredCmdResponseProto
> -----------------------------------------------------------------
>
> Key: HDFS-13413
> URL: https://issues.apache.org/jira/browse/HDFS-13413
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Shashikant Banerjee
> Assignee: Shashikant Banerjee
> Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-13413-HDFS-7240.000.patch, HDFS-13413-HDFS-7240.001.patch
>
> ClusterId as well as DatanodeUuid are currently optional fields in {{SCMRegisteredCmdResponseProto}}. We have to make both clusterId and DatanodeUuid required fields and handle them properly. As of now, we don't do anything with the response of datanode registration. We should validate the clusterId and also the datanodeUuid.
[jira] [Updated] (HDFS-13435) RBF: Fix wrong error logs
[ https://issues.apache.org/jira/browse/HDFS-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13435:
--
Status: Patch Available (was: Open)

> RBF: Fix wrong error logs
> -------------------------
>
> Key: HDFS-13435
> URL: https://issues.apache.org/jira/browse/HDFS-13435
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Affects Versions: 3.0.1
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Priority: Major
> Attachments: HDFS-13435.001.patch
[jira] [Updated] (HDFS-13435) RBF: Fix wrong error logs
[ https://issues.apache.org/jira/browse/HDFS-13435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13435:
--
Attachment: HDFS-13435.001.patch

> RBF: Fix wrong error logs
> -------------------------
>
> Key: HDFS-13435
> URL: https://issues.apache.org/jira/browse/HDFS-13435
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Affects Versions: 3.0.1
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Priority: Major
> Attachments: HDFS-13435.001.patch
[jira] [Created] (HDFS-13435) RBF: Fix wrong error logs
Yiqun Lin created HDFS-13435:
--
Summary: RBF: Fix wrong error logs
Key: HDFS-13435
URL: https://issues.apache.org/jira/browse/HDFS-13435
Project: Hadoop HDFS
Issue Type: Sub-task
Affects Versions: 3.0.1
Reporter: Yiqun Lin
Assignee: Yiqun Lin
[jira] [Commented] (HDFS-13346) RBF: Fix synchronization of router quota and ns quota
[ https://issues.apache.org/jira/browse/HDFS-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434949#comment-16434949 ] genericqa commented on HDFS-13346:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 19s | Docker mode activated. |
|| || || || Prechecks ||
| +1 | @author | 0m 0s | The patch does not contain any @author tags. |
| +1 | test4tests | 0m 0s | The patch appears to include 1 new or modified test files. |
|| || || || trunk Compile Tests ||
| +1 | mvninstall | 30m 9s | trunk passed |
| +1 | compile | 0m 26s | trunk passed |
| +1 | checkstyle | 0m 19s | trunk passed |
| +1 | mvnsite | 0m 29s | trunk passed |
| +1 | shadedclient | 11m 16s | branch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 48s | trunk passed |
| +1 | javadoc | 0m 32s | trunk passed |
|| || || || Patch Compile Tests ||
| +1 | mvninstall | 0m 30s | the patch passed |
| +1 | compile | 0m 24s | the patch passed |
| +1 | cc | 0m 24s | the patch passed |
| +1 | javac | 0m 24s | the patch passed |
| -0 | checkstyle | 0m 14s | hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) |
| +1 | mvnsite | 0m 26s | the patch passed |
| +1 | whitespace | 0m 0s | The patch has no whitespace issues. |
| +1 | shadedclient | 11m 36s | patch has no errors when building and testing our client artifacts. |
| +1 | findbugs | 0m 50s | the patch passed |
| +1 | javadoc | 0m 26s | the patch passed |
|| || || || Other Tests ||
| -1 | unit | 12m 5s | hadoop-hdfs-rbf in the patch failed. |
| +1 | asflicense | 0m 22s | The patch does not generate ASF License warnings. |
| | | 72m 5s | |

|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterAdminCLI |

|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13346 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918668/HDFS-13346.003.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc |
| uname | Linux 421aeb2cc75f 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / c7cd362 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23893/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23893/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt
[jira] [Commented] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fileds in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434947#comment-16434947 ] Bharat Viswanadham commented on HDFS-13413:
--
Hi [~shashikant], thanks for reporting and working on this. The patch LGTM. One minor nit in RegistereEndPointTask.java, lines 102-109: two of the preconditions say "datanode Id", and one says "datanode ID". Can we use "datanode ID" in all places to be consistent?

> ClusterId and DatanodeUuid should be marked mandatory fileds in
> SCMRegisteredCmdResponseProto
> -----------------------------------------------------------------
>
> Key: HDFS-13413
> URL: https://issues.apache.org/jira/browse/HDFS-13413
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: ozone
> Reporter: Shashikant Banerjee
> Assignee: Shashikant Banerjee
> Priority: Minor
> Fix For: HDFS-7240
>
> Attachments: HDFS-13413-HDFS-7240.000.patch
>
> ClusterId as well as DatanodeUuid are currently optional fields in {{SCMRegisteredCmdResponseProto}}. We have to make both clusterId and DatanodeUuid required fields and handle them properly. As of now, we don't do anything with the response of datanode registration. We should validate the clusterId and also the datanodeUuid.
[jira] [Created] (HDFS-13434) RBF: Fix dead links link RBF document
Akira Ajisaka created HDFS-13434:
--
Summary: RBF: Fix dead links link RBF document
Key: HDFS-13434
URL: https://issues.apache.org/jira/browse/HDFS-13434
Project: Hadoop HDFS
Issue Type: Sub-task
Components: documentation
Reporter: Akira Ajisaka
[jira] [Updated] (HDFS-13431) Ozone: Ozone Shell should use RestClient and RpcClient
[ https://issues.apache.org/jira/browse/HDFS-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDFS-13431:
--
Status: Patch Available (was: Open)

> Ozone: Ozone Shell should use RestClient and RpcClient
> ------------------------------------------------------
>
> Key: HDFS-13431
> URL: https://issues.apache.org/jira/browse/HDFS-13431
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Lokesh Jain
> Assignee: Lokesh Jain
> Priority: Major
> Attachments: HDFS-13431-HDFS-7240.001.patch
>
> Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and RpcClient instead of OzoneRestClient.
[jira] [Commented] (HDFS-13425) Ozone: Clean-up of ozone related change from hadoop-common-project
[ https://issues.apache.org/jira/browse/HDFS-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434920#comment-16434920 ] Lokesh Jain commented on HDFS-13425:
--
[~msingh] Thanks for reviewing the patch! The v3 patch addresses your comments.

> Ozone: Clean-up of ozone related change from hadoop-common-project
> -------------------------------------------------------------------
>
> Key: HDFS-13425
> URL: https://issues.apache.org/jira/browse/HDFS-13425
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Nanda kumar
> Assignee: Lokesh Jain
> Priority: Major
> Attachments: HDFS-13425-HDFS-7240.001.patch, HDFS-13425-HDFS-7240.002.patch, HDFS-13425-HDFS-7240.003.patch
>
> This jira tracks the clean-up and reverting of the ozone-related changes made in hadoop-common-project.
[jira] [Updated] (HDFS-13425) Ozone: Clean-up of ozone related change from hadoop-common-project
[ https://issues.apache.org/jira/browse/HDFS-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDFS-13425:
--
Attachment: HDFS-13425-HDFS-7240.003.patch

> Ozone: Clean-up of ozone related change from hadoop-common-project
> -------------------------------------------------------------------
>
> Key: HDFS-13425
> URL: https://issues.apache.org/jira/browse/HDFS-13425
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Nanda kumar
> Assignee: Lokesh Jain
> Priority: Major
> Attachments: HDFS-13425-HDFS-7240.001.patch, HDFS-13425-HDFS-7240.002.patch, HDFS-13425-HDFS-7240.003.patch
>
> This jira tracks the clean-up and reverting of the ozone-related changes made in hadoop-common-project.
[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434908#comment-16434908 ] genericqa commented on HDFS-13388:
--
(x) -1 overall

|| Vote || Subsystem || Runtime || Comment ||
| 0 | reexec | 0m 0s | Docker mode activated. |
| -1 | patch | 0m 7s | HDFS-13388 does not apply to trunk. Rebase required? Wrong branch? See https://wiki.apache.org/hadoop/HowToContribute for help. |

|| Subsystem || Report/Notes ||
| JIRA Issue | HDFS-13388 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918666/HADOOP-13388.0009.patch |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23892/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |

This message was automatically generated.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> ----------------------------------------------------------------------
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Reporter: Jinglun
> Assignee: Jinglun
> Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, HADOOP-13388.0009.patch
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first simultaneously call multiple configured NNs to decide which is the active Namenode and then for subsequent calls it will invoke the previously successful NN." But the current code calls multiple configured NNs every time, even when we have already found the successful NN.
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member proxyInfo is assigned only when it is constructed or when failover occurs. RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only proxy we can get is always a dynamic proxy handled by RequestHedgingInvocationHandler.class, which handles the invoked method by calling multiple configured NNs.
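The intended behavior described in the issue — hedge across all targets only until one succeeds, then pin subsequent calls to the winner — can be illustrated with a toy model. This is a sketch of the hedging idea only, not Hadoop's RequestHedgingProxyProvider; the class and member names (HedgingCaller, current, targets) are hypothetical:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

// Toy model (not Hadoop code): call all targets concurrently on the first
// request, remember whichever one succeeds, and route later requests
// directly to it instead of hedging again.
class HedgingCaller<T> {
  private final List<Supplier<T>> targets;
  private volatile Supplier<T> current; // the previously successful target

  HedgingCaller(List<Supplier<T>> targets) {
    this.targets = targets;
  }

  T call() throws Exception {
    if (current != null) {
      return current.get(); // subsequent calls skip the hedging entirely
    }
    ExecutorService pool = Executors.newFixedThreadPool(targets.size());
    try {
      List<Callable<T>> calls = new ArrayList<>();
      for (Supplier<T> t : targets) {
        // Record the winner before returning its value.
        calls.add(() -> { T v = t.get(); current = t; return v; });
      }
      return pool.invokeAny(calls); // first successful result wins
    } finally {
      pool.shutdownNow();
      pool.awaitTermination(5, TimeUnit.SECONDS);
    }
  }
}
```

The bug report above is essentially that the real provider never consults its equivalent of `current`, so every call takes the `invokeAny`-style path.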
[jira] [Comment Edited] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434817#comment-16434817 ] Jinglun edited comment on HDFS-13388 at 4/12/18 3:42 AM:
--
Hi [~elgoiri], patch 007 is an addendum for what we committed. How about we go with the first option? I uploaded a new patch 009 which includes all the changes. I also added a new test case to check whether RequestHedgingInvocationHandler throws the right exception. I will submit the patch once the current commit is reverted from trunk; otherwise Jenkins won't let it pass. I'm also OK with the other two options, so if you think another option is better, I can provide a new patch or open a new jira.

was (Author: lijinglun): hi [~elgoiri], patch 007 is an addendum for what we committed. I prefer the first option, and patch 008 includes all the changes.

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> ----------------------------------------------------------------------
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Reporter: Jinglun
> Assignee: Jinglun
> Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, HADOOP-13388.0009.patch
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first simultaneously call multiple configured NNs to decide which is the active Namenode and then for subsequent calls it will invoke the previously successful NN." But the current code calls multiple configured NNs every time, even when we have already found the successful NN.
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member proxyInfo is assigned only when it is constructed or when failover occurs. RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only proxy we can get is always a dynamic proxy handled by RequestHedgingInvocationHandler.class, which handles the invoked method by calling multiple configured NNs.
[jira] [Updated] (HDFS-13346) RBF: Fix synchronization of router quota and ns quota
[ https://issues.apache.org/jira/browse/HDFS-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liuhongtong updated HDFS-13346:
--
Attachment: HDFS-13346.003.patch

> RBF: Fix synchronization of router quota and ns quota
> -----------------------------------------------------
>
> Key: HDFS-13346
> URL: https://issues.apache.org/jira/browse/HDFS-13346
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: liuhongtong
> Assignee: liuhongtong
> Priority: Major
> Attachments: HDFS-13346.001.patch, HDFS-13346.002.patch, HDFS-13346.003.patch
>
> Check Router Quota and ns Quota:
> {code}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> Source                Destinations  Owner  Group  Mode       Quota/Usage
> /ns10t                ns10->/ns10t  hadp   hadp   rwxr-xr-x  [NsQuota: 150/319, SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tt    hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>        150        -155        none        inf        3        302        0 hdfs://ns10/ns10t
> {code}
> Update Router Quota:
> {code:java}
> $ hdfs dfsrouteradmin -setQuota /ns10t -nsQuota 400
> Successfully set quota for mount point /ns10t
> {code}
> Check Router Quota and ns Quota:
> {code:java}
> $ hdfs dfsrouteradmin -ls /ns10t
> Mount Table Entries:
> Source                Destinations  Owner  Group  Mode       Quota/Usage
> /ns10t                ns10->/ns10t  hadp   hadp   rwxr-xr-x  [NsQuota: 400/319, SsQuota: -/-]
> /ns10t/ns1mountpoint  ns1->/a/tt    hadp   hadp   rwxr-xr-x  [NsQuota: -/-, SsQuota: -/-]
> $ hdfs dfs -count -q hdfs://ns10/ns10t
>        150        -155        none        inf        3        302        0 hdfs://ns10/ns10t
> {code}
> Now the Router Quota has been updated successfully, but the ns Quota has not.
[jira] [Commented] (HDFS-13425) Ozone: Clean-up of ozone related change from hadoop-common-project
[ https://issues.apache.org/jira/browse/HDFS-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434887#comment-16434887 ] Mukul Kumar Singh commented on HDFS-13425:
--
Thanks for the updated patch, [~ljain]. Please find my comments below:

ContainerSupervisor:115 -> maxContainerReportThreads is not being used. Let's copy the thread pool creation from Hadoop Executor to this method so that the number of threads can be controlled using a configuration variable.

> Ozone: Clean-up of ozone related change from hadoop-common-project
> -------------------------------------------------------------------
>
> Key: HDFS-13425
> URL: https://issues.apache.org/jira/browse/HDFS-13425
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Reporter: Nanda kumar
> Assignee: Lokesh Jain
> Priority: Major
> Attachments: HDFS-13425-HDFS-7240.001.patch, HDFS-13425-HDFS-7240.002.patch
>
> This jira tracks the clean-up and reverting of the ozone-related changes made in hadoop-common-project.
[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-13388:
--
Attachment: HADOOP-13388.0009.patch

> RequestHedgingProxyProvider calls multiple configured NNs all the time
> ----------------------------------------------------------------------
>
> Key: HDFS-13388
> URL: https://issues.apache.org/jira/browse/HDFS-13388
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs-client
> Reporter: Jinglun
> Assignee: Jinglun
> Priority: Major
> Fix For: 2.10.0, 3.2.0, 3.1.1
>
> Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch, HADOOP-13388.0009.patch
>
> In HDFS-7858, RequestHedgingProxyProvider was designed to "first simultaneously call multiple configured NNs to decide which is the active Namenode and then for subsequent calls it will invoke the previously successful NN." But the current code calls multiple configured NNs every time, even when we have already found the successful NN.
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member proxyInfo is assigned only when it is constructed or when failover occurs. RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the only proxy we can get is always a dynamic proxy handled by RequestHedgingInvocationHandler.class, which handles the invoked method by calling multiple configured NNs.
[jira] [Commented] (HDFS-13418) NetworkTopology should be configurable when enable DFSNetworkTopology
[ https://issues.apache.org/jira/browse/HDFS-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434876#comment-16434876 ] Yiqun Lin commented on HDFS-13418:
--
LGTM. [~Tao Jie], would you mind attaching a patch for branch-2? It looks like the current patch doesn't apply cleanly to branch-2.

> NetworkTopology should be configurable when enable DFSNetworkTopology
> ----------------------------------------------------------------------
>
> Key: HDFS-13418
> URL: https://issues.apache.org/jira/browse/HDFS-13418
> Project: Hadoop HDFS
> Issue Type: Improvement
> Affects Versions: 3.0.1
> Reporter: Tao Jie
> Assignee: Tao Jie
> Priority: Major
> Attachments: HDFS-13418.001.patch, HDFS-13418.002.patch, HDFS-13418.003.patch, HDFS-13418.004.patch
>
> In HDFS-11530 we introduced DFSNetworkTopology, and in HDFS-11998 we made DFSNetworkTopology the default implementation.
> We still have {{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} in core-site.default. Actually this property has no effect once {{dfs.use.dfs.network.topology}} is true.
> In {{DatanodeManager}}, networktopology is initialized as
> {code}
> if (useDfsNetworkTopology) {
>   networktopology = DFSNetworkTopology.getInstance(conf);
> } else {
>   networktopology = NetworkTopology.getInstance(conf);
> }
> {code}
> I think we should still make the NetworkTopology configurable rather than hard-code the implementation, since we may need another NetworkTopology impl. I am not sure if there are other considerations. Any thoughts? [~vagarychen] [~linyiqun]
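The config-driven selection discussed in the issue above can be sketched generically. This toy uses plain reflection with a Map standing in for Hadoop's Configuration object, and the key name is taken from the issue text while the default value is purely illustrative:

```java
import java.util.Map;

// Toy sketch (not Hadoop's Configuration API): pick the topology
// implementation from a configured class name instead of hard-coding
// one branch per implementation.
class TopologySelector {
  // conf stands in for a real configuration source; the fallback class
  // here is illustrative only.
  static Object newTopology(Map<String, String> conf) throws Exception {
    String impl = conf.getOrDefault("net.topology.impl", "java.util.ArrayList");
    return Class.forName(impl).getDeclaredConstructor().newInstance();
  }
}
```

The design point is that adding a third NetworkTopology implementation then requires only a configuration change, not another if/else branch at the construction site.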
[jira] [Updated] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java
[ https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13428: - Fix Version/s: 3.0.4 2.9.2 3.1.1 3.2.0 2.10.0 > RBF: Remove LinkedList From StateStoreFileImpl.java > --- > > Key: HDFS-13428 > URL: https://issues.apache.org/jira/browse/HDFS-13428 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation >Affects Versions: 3.0.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4 > > Attachments: HDFS-13428.1.patch > > > Replace {{LinkedList}} with {{ArrayList}} implementation in the > StateStoreFileImpl class. This is especially advantageous because we can > pre-allocate the internal array before a copy occurs. {{ArrayList}} is > faster for iterations and requires less memory than {{LinkedList}}. > {code:java} > protected List<String> getChildren(String path) { > List<String> ret = new LinkedList<>(); > File dir = new File(path); > File[] files = dir.listFiles(); > if (files != null) { > for (File file : files) { > String filename = file.getName(); > ret.add(filename); > } > } > return ret; > } > {code}
[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2
[ https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434868#comment-16434868 ] Yiqun Lin commented on HDFS-12615: -- Thanks [~ywskycn] and [~elgoiri] for the comments. {quote}Regarding rebalancer, it is tracked in HDFS-13123. Let me put the poc patch this week, and clear the implementation details, and then we can split out some sub-tasks there. {quote} Looking forward to seeing the patch, :). {quote}As you mentioned we closed most of the opened tasks and now we have three big parts: ... I think tracking 1 and 2 in this umbrella is fine but I'm thinking on making the others their own umbrella: ... {quote} Agreed. Let's see if there are some comments from others. > Router-based HDFS federation phase 2 > > > Key: HDFS-12615 > URL: https://issues.apache.org/jira/browse/HDFS-12615 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Labels: RBF > > This umbrella JIRA tracks set of improvements over the Router-based HDFS > federation (HDFS-10467). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java
[ https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434851#comment-16434851 ] BELUGA BEHR commented on HDFS-13428: [~elgoiri] [https://stackoverflow.com/questions/322715/when-to-use-linkedlist-over-arraylist] I'm just a tech worker by day and an enthusiast/hobbyist by night. I like contributing to the open source community, making the world a little bit better through coding. I just open random files and review them with a focus on polish and making small improvements for performance and code clarity/conciseness. > RBF: Remove LinkedList From StateStoreFileImpl.java > --- > > Key: HDFS-13428 > URL: https://issues.apache.org/jira/browse/HDFS-13428 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation >Affects Versions: 3.0.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HDFS-13428.1.patch > > > Replace {{LinkedList}} with {{ArrayList}} implementation in the > StateStoreFileImpl class. This is especially advantageous because we can > pre-allocate the internal array before a copy occurs. {{ArrayList}} is > faster for iterations and requires less memory than {{LinkedList}}. > {code:java} > protected List<String> getChildren(String path) { > List<String> ret = new LinkedList<>(); > File dir = new File(path); > File[] files = dir.listFiles(); > if (files != null) { > for (File file : files) { > String filename = file.getName(); > ret.add(filename); > } > } > return ret; > } > {code}
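As a sketch of what the patched method could look like with {{ArrayList}} (pre-sizing the list from {{listFiles()}} is an assumption about the final patch, not a quote from it):

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.util.ArrayList;
import java.util.List;

public class ListChildren {
    // ArrayList variant of getChildren: the list is pre-sized from the
    // directory listing, so no resizing or per-element node allocation
    // happens while filling it.
    static List<String> getChildren(String path) {
        File dir = new File(path);
        File[] files = dir.listFiles();
        if (files == null) {
            return new ArrayList<>(0);
        }
        List<String> ret = new ArrayList<>(files.length);
        for (File file : files) {
            ret.add(file.getName());
        }
        return ret;
    }

    public static void main(String[] args) throws IOException {
        // Small smoke test against a throwaway directory.
        File dir = Files.createTempDirectory("statestore").toFile();
        new File(dir, "record-1").createNewFile();
        new File(dir, "record-2").createNewFile();
        System.out.println(getChildren(dir.getAbsolutePath()).size());
    }
}
```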
[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434848#comment-16434848 ] genericqa commented on HDFS-13388: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDFS-13388 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-13388 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918656/HADOOP-13388.0008.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23891/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RequestHedgingProxyProvider calls multiple configured NNs all the time > -- > > Key: HDFS-13388 > URL: https://issues.apache.org/jira/browse/HDFS-13388 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.1.1 > > Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, > HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, > HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch > > > In HDFS-7858 RequestHedgingProxyProvider was designed to "first > simultaneously call multiple configured NNs to decide which is the active > Namenode and then for subsequent calls it will invoke the previously > successful NN ." But the current code call multiple configured NNs every time > even when we already got the successful NN. 
> That's because in RetryInvocationHandler.java, ProxyDescriptor's member > proxyInfo is assigned only when it is constructed or when failover occurs. > RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the > only proxy we can get is always a dynamic proxy handled by > RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class > handles invoked method by calling multiple configured NNs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
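The intended behaviour can be modelled with a toy example: hedge across all configured NNs once, then pin the first successful proxy in a {{currentUsedProxy}}-style field so later calls skip the fan-out. These classes are illustrative stand-ins, not Hadoop's real {{RequestHedgingProxyProvider}}:

```java
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model: probe all candidates until one succeeds, cache the winner,
// and route every subsequent call only to the cached candidate.
public class HedgingDemo {
    static class Candidate {
        final String name;
        final boolean healthy;
        final AtomicInteger calls = new AtomicInteger();
        Candidate(String name, boolean healthy) {
            this.name = name; this.healthy = healthy;
        }
        String invoke() {
            calls.incrementAndGet();
            if (!healthy) throw new RuntimeException("standby: " + name);
            return "ok from " + name;
        }
    }

    private final List<Candidate> candidates;
    private Candidate currentUsedProxy; // cached winner; null until first success

    HedgingDemo(List<Candidate> candidates) { this.candidates = candidates; }

    String call() {
        if (currentUsedProxy != null) {
            return currentUsedProxy.invoke(); // no hedging after first success
        }
        for (Candidate c : candidates) {      // hedge across all configured NNs
            try {
                String r = c.invoke();
                currentUsedProxy = c;
                return r;
            } catch (RuntimeException ignored) { }
        }
        throw new IllegalStateException("no candidate succeeded");
    }

    public static void main(String[] args) {
        Candidate standby = new Candidate("nn1", false);
        Candidate active = new Candidate("nn2", true);
        HedgingDemo demo = new HedgingDemo(Arrays.asList(standby, active));
        demo.call();
        demo.call();
        demo.call();
        // The standby is probed only during the initial hedge.
        System.out.println(standby.calls.get() + " " + active.calls.get());
    }
}
```

The bug report says the real code never reaches the cached-winner branch, because {{currentUsedProxy}} is still null whenever {{RetryInvocationHandler}} fetches a proxy, so every call takes the hedging path.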
[jira] [Updated] (HDFS-13427) Fix the section titles of transparent encryption document
[ https://issues.apache.org/jira/browse/HDFS-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-13427: - Fix Version/s: (was: 2.9.2) 2.9.1 Committed to branch-2.9.1 as well. > Fix the section titles of transparent encryption document > - > > Key: HDFS-13427 > URL: https://issues.apache.org/jira/browse/HDFS-13427 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Fix For: 2.10.0, 2.9.1, 2.8.4, 3.2.0, 3.1.1, 3.0.3 > > Attachments: HDFS-13427.01.patch > > > {noformat} > The `crypto` command before Hadoop 2.8.0 does not provision the `.Trash` > directory automatically. If an encryption zone is created before Hadoop > 2.8.0, and then the cluster is upgraded to Hadoop 2.8.0 or above, the trash > directory can be provisioned using `-provisionTrash` option (e.g., `hdfs > crypto -provisionTrash -path /zone`). > Attack vectors > -- > {noformat} > The long sentence starts with 'The crypto' wrongly become the title. We need > to add a blank line between the sentence and 'Attack vectors' to fix this. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13427) Fix the section titles of transparent encryption document
[ https://issues.apache.org/jira/browse/HDFS-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-13427: - Resolution: Fixed Fix Version/s: 3.0.3 2.9.2 3.1.1 3.2.0 2.8.4 2.10.0 Status: Resolved (was: Patch Available) Committed this to trunk, branch-3.1, branch-3.0, branch-2, branch-2.9, and branch-2.8. Thanks [~arpitagarwal]! > Fix the section titles of transparent encryption document > - > > Key: HDFS-13427 > URL: https://issues.apache.org/jira/browse/HDFS-13427 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Fix For: 2.10.0, 2.8.4, 3.2.0, 3.1.1, 2.9.2, 3.0.3 > > Attachments: HDFS-13427.01.patch > > > {noformat} > The `crypto` command before Hadoop 2.8.0 does not provision the `.Trash` > directory automatically. If an encryption zone is created before Hadoop > 2.8.0, and then the cluster is upgraded to Hadoop 2.8.0 or above, the trash > directory can be provisioned using `-provisionTrash` option (e.g., `hdfs > crypto -provisionTrash -path /zone`). > Attack vectors > -- > {noformat} > The long sentence starting with 'The crypto' wrongly becomes the title. We need > to add a blank line between the sentence and 'Attack vectors' to fix this.
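The underlying Markdown rule: a line of dashes turns the paragraph directly above it into a setext heading, so without a separating blank line the long sentence is absorbed into the "Attack vectors" title. A minimal before/after (text abbreviated):

```markdown
<!-- Broken: the sentence and "Attack vectors" form one paragraph,
     and the dashed underline makes all of it the heading. -->
...can be provisioned using the `-provisionTrash` option.
Attack vectors
--------------

<!-- Fixed: the blank line ends the paragraph, so only
     "Attack vectors" becomes the heading. -->
...can be provisioned using the `-provisionTrash` option.

Attack vectors
--------------
```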
[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434817#comment-16434817 ] Jinglun commented on HDFS-13388: hi [~elgoiri], patch 007 is an addendum for what we committed. I prefer the first option, and patch 008 includes all the changes. > RequestHedgingProxyProvider calls multiple configured NNs all the time > -- > > Key: HDFS-13388 > URL: https://issues.apache.org/jira/browse/HDFS-13388 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.1.1 > > Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, > HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, > HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch > > > In HDFS-7858 RequestHedgingProxyProvider was designed to "first > simultaneously call multiple configured NNs to decide which is the active > Namenode and then for subsequent calls it will invoke the previously > successful NN ." But the current code call multiple configured NNs every time > even when we already got the successful NN. > That's because in RetryInvocationHandler.java, ProxyDescriptor's member > proxyInfo is assigned only when it is constructed or when failover occurs. > RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the > only proxy we can get is always a dynamic proxy handled by > RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class > handles invoked method by calling multiple configured NNs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-13388: --- Status: Patch Available (was: Open) > RequestHedgingProxyProvider calls multiple configured NNs all the time > -- > > Key: HDFS-13388 > URL: https://issues.apache.org/jira/browse/HDFS-13388 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.1.1 > > Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, > HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, > HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch > > > In HDFS-7858 RequestHedgingProxyProvider was designed to "first > simultaneously call multiple configured NNs to decide which is the active > Namenode and then for subsequent calls it will invoke the previously > successful NN ." But the current code call multiple configured NNs every time > even when we already got the successful NN. > That's because in RetryInvocationHandler.java, ProxyDescriptor's member > proxyInfo is assigned only when it is constructed or when failover occurs. > RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the > only proxy we can get is always a dynamic proxy handled by > RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class > handles invoked method by calling multiple configured NNs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-13388: --- Status: Open (was: Patch Available) > RequestHedgingProxyProvider calls multiple configured NNs all the time > -- > > Key: HDFS-13388 > URL: https://issues.apache.org/jira/browse/HDFS-13388 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.1.1 > > Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, > HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, > HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch > > > In HDFS-7858 RequestHedgingProxyProvider was designed to "first > simultaneously call multiple configured NNs to decide which is the active > Namenode and then for subsequent calls it will invoke the previously > successful NN ." But the current code call multiple configured NNs every time > even when we already got the successful NN. > That's because in RetryInvocationHandler.java, ProxyDescriptor's member > proxyInfo is assigned only when it is constructed or when failover occurs. > RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the > only proxy we can get is always a dynamic proxy handled by > RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class > handles invoked method by calling multiple configured NNs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jinglun updated HDFS-13388: --- Attachment: HADOOP-13388.0008.patch > RequestHedgingProxyProvider calls multiple configured NNs all the time > -- > > Key: HDFS-13388 > URL: https://issues.apache.org/jira/browse/HDFS-13388 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.1.1 > > Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, > HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, > HADOOP-13388.0006.patch, HADOOP-13388.0007.patch, HADOOP-13388.0008.patch > > > In HDFS-7858 RequestHedgingProxyProvider was designed to "first > simultaneously call multiple configured NNs to decide which is the active > Namenode and then for subsequent calls it will invoke the previously > successful NN ." But the current code call multiple configured NNs every time > even when we already got the successful NN. > That's because in RetryInvocationHandler.java, ProxyDescriptor's member > proxyInfo is assigned only when it is constructed or when failover occurs. > RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the > only proxy we can get is always a dynamic proxy handled by > RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class > handles invoked method by calling multiple configured NNs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434798#comment-16434798 ] genericqa commented on HDFS-13413: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 41s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 17s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 12s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 17s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} container-service in HDFS-7240 failed. 
{color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s{color} | {color:red} container-service in HDFS-7240 failed. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s{color} | {color:red} container-service in the patch failed. {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 0m 10s{color} | {color:red} container-service in the patch failed. {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 0m 10s{color} | {color:red} container-service in the patch failed. {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 0m 10s{color} | {color:red} container-service in the patch failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 8s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 10s{color} | {color:red} container-service in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 13s{color} | {color:red} container-service in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 13s{color} | {color:red} container-service in the patch failed. {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 13s{color} | {color:red} container-service in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 24s{color} | {color:red} The patch generated 5 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f | | JIRA Issue | HDFS-13413 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918643/HDFS-13413-HDFS-7240.000.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux a681f6f1cab2 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / ea85801 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | compile | https://builds.apache.org/job/PreCommit-HDFS-Build/23890/artifact/out/branch-compile-hadoop-hdds_container-service.txt | | mvnsite | https://builds.apache.org/job/PreCommit-HDFS
[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result
[ https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434796#comment-16434796 ] Dibyendu Karmakar commented on HDFS-13386: -- Thanks [~elgoiri] :) > RBF: Wrong date information in list file(-ls) result > > > Key: HDFS-13386 > URL: https://issues.apache.org/jira/browse/HDFS-13386 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Minor > Fix For: 3.2.0, 3.1.1, 2.9.2, 3.0.4 > > Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, > HDFS-13386-004.patch, HDFS-13386-005.patch, HDFS-13386-006.patch, > HDFS-13386-007.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, > image-2018-04-03-11-59-51-623.png > > > # hdfs dfs -ls > !image-2018-04-03-11-59-51-623.png! > this is happening because getMountPointDates is not implemented > {code:java} > private Map<String, Long> getMountPointDates(String path) { > Map<String, Long> ret = new TreeMap<>(); > // TODO add when we have a Mount Table > return ret; > } > {code}
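For illustration only, a toy version of the missing method: walk the mount table and map each immediate child mount point of {{path}} to its latest modification date. {{MountRecord}} and its fields are invented here (the committed patches work against the real mount-table records), and the sketch assumes Java 16+ for records:

```java
import java.util.Map;
import java.util.TreeMap;

// Toy stand-in for the unimplemented getMountPointDates: for each mount
// entry under `path`, record the newest modification date per first-level
// child name.
public class MountDates {
    record MountRecord(String src, long dateModified) {}

    static Map<String, Long> getMountPointDates(
            Iterable<MountRecord> mountTable, String path) {
        Map<String, Long> ret = new TreeMap<>();
        String prefix = path.endsWith("/") ? path : path + "/";
        for (MountRecord r : mountTable) {
            if (r.src().startsWith(prefix)) {
                // First path component under `path`.
                String child = r.src().substring(prefix.length()).split("/")[0];
                ret.merge(child, r.dateModified(), Math::max);
            }
        }
        return ret;
    }

    public static void main(String[] args) {
        var table = java.util.List.of(
            new MountRecord("/a/x", 100L),
            new MountRecord("/a/x/deep", 300L),
            new MountRecord("/a/y", 200L));
        System.out.println(getMountPointDates(table, "/a"));
    }
}
```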
[jira] [Updated] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster
[ https://issues.apache.org/jira/browse/HDFS-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13433: - Description: In the following HA+Federated setup with two nameservices ns1 and ns2: # ns1 -> namenodes nn1, nn2 # ns2 -> namenodes nn3, nn4 # fs.defaultFS is {{hdfs://ns1}}. A webhdfs request issued to nn3/nn4 will be routed to ns1. This is because {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} using fs.defaultFS before the config is overriden. was: In the following HA+Federated setup: # NS1 -> namenodes nn1, nn2 # NS2 -> namenodes nn3, nn4 # fs.defaultFS is {{hdfs://ns1}}. A webhdfs request issued to nn3 will be routed to NS1. This is because {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} using fs.defaultFS before the config is overriden. > webhdfs requests can be routed incorrectly in federated cluster > --- > > Key: HDFS-13433 > URL: https://issues.apache.org/jira/browse/HDFS-13433 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Critical > > In the following HA+Federated setup with two nameservices ns1 and ns2: > # ns1 -> namenodes nn1, nn2 > # ns2 -> namenodes nn3, nn4 > # fs.defaultFS is {{hdfs://ns1}}. > A webhdfs request issued to nn3/nn4 will be routed to ns1. This is because > {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} > using fs.defaultFS before the config is overriden. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster
[ https://issues.apache.org/jira/browse/HDFS-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13433: - Description: In the following HA+Federated setup: # NS1 -> namenodes nn1, nn2 # NS2 -> namenodes nn3, nn4 # fs.defaultFS is {{hdfs://ns1}}. A webhdfs request issued to nn3 will be routed to NS1. This is because {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} using fs.defaultFS before the config is overriden. was: In the following HA+Federated setup: # NS1 -> namenodes nn1, nn2 # NS2 -> namenodes nn3, nn4 # fs.defaultFS is {{hdfs://ns1}}. A webhdfs request issued to nn3 will be routed to nn1. This is because {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} using fs.defaultFS before the config is overriden. > webhdfs requests can be routed incorrectly in federated cluster > --- > > Key: HDFS-13433 > URL: https://issues.apache.org/jira/browse/HDFS-13433 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Critical > > In the following HA+Federated setup: > # NS1 -> namenodes nn1, nn2 > # NS2 -> namenodes nn3, nn4 > # fs.defaultFS is {{hdfs://ns1}}. > A webhdfs request issued to nn3 will be routed to NS1. This is because > {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} > using fs.defaultFS before the config is overriden. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13433) webhdfs requests can be routed incorrectly in federated cluster
Arpit Agarwal created HDFS-13433: Summary: webhdfs requests can be routed incorrectly in federated cluster Key: HDFS-13433 URL: https://issues.apache.org/jira/browse/HDFS-13433 Project: Hadoop HDFS Issue Type: Bug Reporter: Arpit Agarwal Assignee: Arpit Agarwal In the following HA+Federated setup: # NS1 -> namenodes nn1, nn2 # NS2 -> namenodes nn3, nn4 # fs.defaultFS is {{hdfs://ns1}}. A webhdfs request issued to nn3 will be routed to nn1. This is because {{setClientNamenodeAddress}} initializes {{NameNode#clientNamenodeAddress}} using fs.defaultFS before the config is overridden.
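For reference, the HA+Federated layout described in the report is configured roughly as below (nameservice and NameNode IDs are illustrative; per-NN RPC/HTTP address keys are omitted, and {{fs.defaultFS}} itself lives in core-site.xml):

```xml
<!-- hdfs-site.xml: two federated nameservices, each HA with two NNs. -->
<property>
  <name>dfs.nameservices</name>
  <value>ns1,ns2</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns1</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.ha.namenodes.ns2</name>
  <value>nn3,nn4</value>
</property>

<!-- core-site.xml: the default filesystem points at ns1, which is the
     value setClientNamenodeAddress picks up too early. -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://ns1</value>
</property>
```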
[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.
[ https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434729#comment-16434729 ] Erik Krogen commented on HDFS-13399: {quote} Just to make sure we are on the same page. Hadoop requires backward compatibility for minor releases. So please NO breaking! As I look at it we cannot break compatibility ever: we can only add new methods, parameters, etc. and deprecate old ones. Package-private is not a part of API, so it can change. {quote} This is not generally correct. While it is true for Public/Stable interfaces, it is not for Evolving interfaces. Please see the published [Hadoop compatibility guidelines|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/InterfaceClassification.html#Evolving]: {quote} Evolving An Evolving interface is typically exposed so that users or external code can make use of a feature before it has stabilized. The expectation that an interface should “eventually” stabilize and be promoted to Stable, however, is not a requirement for the interface to be labeled as Evolving. Incompatible changes are allowed for Evolving interface only at minor releases. Incompatible changes allowed: minor (x.Y.0) Compatible changes allowed: maintenance (x.y.Z) {quote} > Make Client field AlignmentContext non-static. > -- > > Key: HDFS-13399 > URL: https://issues.apache.org/jira/browse/HDFS-13399 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13399-HDFS-12943.000.patch, > HDFS-13399-HDFS-12943.001.patch > > > In HDFS-12977, DFSClient's constructor was altered to make use of a new > static method in Client that allowed one to set an AlignmentContext. This > work is to remove that static field and make each DFSClient pass it's > AlignmentContext down to the proxy Call level. 
[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.
[ https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434705#comment-16434705 ] Konstantin Shvachko commented on HDFS-13399: ??We are allowed to break compatibility at minor releases.?? Just to make sure we are on the same page. Hadoop *requires* backward compatibility for minor releases. So please NO breaking! As I look at it we cannot break compatibility ever: we can only add new methods, parameters, etc. and deprecate old ones. Package-private is not a part of API, so it can change. ??Should we consider creating client and server side configuration for enabling / disabling AlignmentContext processing??? I don't think we need a config parameter for that. This is controlled by the code. If you pass an {{AlignmentContext}} as non null parameter it is used. M/R clients for example wont need it, so null will be used by default. > Make Client field AlignmentContext non-static. > -- > > Key: HDFS-13399 > URL: https://issues.apache.org/jira/browse/HDFS-13399 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13399-HDFS-12943.000.patch, > HDFS-13399-HDFS-12943.001.patch > > > In HDFS-12977, DFSClient's constructor was altered to make use of a new > static method in Client that allowed one to set an AlignmentContext. This > work is to remove that static field and make each DFSClient pass it's > AlignmentContext down to the proxy Call level. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13413: --- Attachment: HDFS-13413-HDFS-7240.000.patch > ClusterId and DatanodeUuid should be marked mandatory fields in > SCMRegisteredCmdResponseProto > - > > Key: HDFS-13413 > URL: https://issues.apache.org/jira/browse/HDFS-13413 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Minor > Fix For: HDFS-7240 > > Attachments: HDFS-13413-HDFS-7240.000.patch > > > ClusterId as well as the DatanodeUuid are optional fields in > {{SCMRegisteredCmdResponseProto}} > currently. We have to make both clusterId and DatanodeUuid required fields > and handle them properly. As of now, we don't do anything with the response of > datanode registration. We should validate the clusterId and also the > datanodeUuid
[jira] [Updated] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13413: --- Attachment: (was: HDFS-13413-HDFS-7240.000.patch) > ClusterId and DatanodeUuid should be marked mandatory fields in > SCMRegisteredCmdResponseProto > - > > Key: HDFS-13413 > URL: https://issues.apache.org/jira/browse/HDFS-13413 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Minor > Fix For: HDFS-7240 > > > ClusterId as well as the DatanodeUuid are optional fields in > {{SCMRegisteredCmdResponseProto}} > currently. We have to make both clusterId and DatanodeUuid required fields > and handle them properly. As of now, we don't do anything with the response of > datanode registration. We should validate the clusterId and also the > datanodeUuid
[jira] [Commented] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434697#comment-16434697 ] Shashikant Banerjee commented on HDFS-13413: Patch v0 makes the DatanodeUuid and ClusterId in the SCM registration response for a datanode mandatory fields and adds the necessary validation while processing the response. > ClusterId and DatanodeUuid should be marked mandatory fields in > SCMRegisteredCmdResponseProto > - > > Key: HDFS-13413 > URL: https://issues.apache.org/jira/browse/HDFS-13413 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Minor > Fix For: HDFS-7240 > > Attachments: HDFS-13413-HDFS-7240.000.patch > > > ClusterId as well as the DatanodeUuid are optional fields in > {{SCMRegisteredCmdResponseProto}} > currently. We have to make both clusterId and DatanodeUuid required fields > and handle them properly. As of now, we don't do anything with the response of > datanode registration. We should validate the clusterId and also the > datanodeUuid
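The validation the patch description mentions could look roughly like the following. This is an illustrative sketch only: {{RegisteredResponse}} is a plain stand-in, not the generated {{SCMRegisteredCmdResponseProto}} class, whose actual API differs.

```java
// Hypothetical sketch of validating a datanode registration response:
// reject a response whose clusterId or datanodeUuid is absent or mismatched.
public class RegistrationValidation {
    static final class RegisteredResponse {
        final String clusterId;
        final String datanodeUuid;
        RegisteredResponse(String clusterId, String datanodeUuid) {
            this.clusterId = clusterId;
            this.datanodeUuid = datanodeUuid;
        }
    }

    static void validate(RegisteredResponse resp, String expectedClusterId, String expectedUuid) {
        // With required proto fields the values can no longer be silently absent,
        // but they still need to match what the datanode expects.
        if (resp.clusterId == null || !resp.clusterId.equals(expectedClusterId)) {
            throw new IllegalStateException("cluster id mismatch: " + resp.clusterId);
        }
        if (resp.datanodeUuid == null || !resp.datanodeUuid.equals(expectedUuid)) {
            throw new IllegalStateException("datanode uuid mismatch: " + resp.datanodeUuid);
        }
    }

    public static void main(String[] args) {
        validate(new RegisteredResponse("c1", "dn1"), "c1", "dn1"); // passes silently
        System.out.println("validation ok");
    }
}
```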
[jira] [Updated] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13413: --- Status: Patch Available (was: Open) > ClusterId and DatanodeUuid should be marked mandatory fields in > SCMRegisteredCmdResponseProto > - > > Key: HDFS-13413 > URL: https://issues.apache.org/jira/browse/HDFS-13413 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Minor > Fix For: HDFS-7240 > > Attachments: HDFS-13413-HDFS-7240.000.patch > > > ClusterId as well as the DatanodeUuid are optional fields in > {{SCMRegisteredCmdResponseProto}} > currently. We have to make both clusterId and DatanodeUuid required fields > and handle them properly. As of now, we don't do anything with the response of > datanode registration. We should validate the clusterId and also the > datanodeUuid
[jira] [Updated] (HDFS-13413) ClusterId and DatanodeUuid should be marked mandatory fields in SCMRegisteredCmdResponseProto
[ https://issues.apache.org/jira/browse/HDFS-13413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-13413: --- Attachment: HDFS-13413-HDFS-7240.000.patch > ClusterId and DatanodeUuid should be marked mandatory fields in > SCMRegisteredCmdResponseProto > - > > Key: HDFS-13413 > URL: https://issues.apache.org/jira/browse/HDFS-13413 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Minor > Fix For: HDFS-7240 > > Attachments: HDFS-13413-HDFS-7240.000.patch > > > ClusterId as well as the DatanodeUuid are optional fields in > {{SCMRegisteredCmdResponseProto}} > currently. We have to make both clusterId and DatanodeUuid required fields > and handle them properly. As of now, we don't do anything with the response of > datanode registration. We should validate the clusterId and also the > datanodeUuid
[jira] [Commented] (HDFS-8824) Do not use small blocks for balancing the cluster
[ https://issues.apache.org/jira/browse/HDFS-8824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434671#comment-16434671 ] Wei-Chiu Chuang commented on HDFS-8824: --- Looks like HDFS-13222 addresses Kihwal's concern. It makes it easy to tune the small-block parameter from the balancer side. > Do not use small blocks for balancing the cluster > - > > Key: HDFS-8824 > URL: https://issues.apache.org/jira/browse/HDFS-8824 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: balancer & mover >Reporter: Tsz Wo Nicholas Sze >Assignee: Tsz Wo Nicholas Sze >Priority: Major > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: h8824_20150727b.patch, h8824_20150811b.patch > > > Balancer gets datanode block lists from the NN and then moves the blocks in order > to balance the cluster. It should not use small blocks, since > moving the small blocks generates a lot of overhead and the small blocks do > not help balance the cluster much.
[jira] [Updated] (HDFS-9412) getBlocks occupies FSLock and takes too long to complete
[ https://issues.apache.org/jira/browse/HDFS-9412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei-Chiu Chuang updated HDFS-9412: -- Release Note: Skip blocks with size below dfs.balancer.getBlocks.min-block-size (default 10MB) when a balancer asks for a list of blocks. > getBlocks occupies FSLock and takes too long to complete > > > Key: HDFS-9412 > URL: https://issues.apache.org/jira/browse/HDFS-9412 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer & mover, namenode >Reporter: He Tianyi >Assignee: He Tianyi >Priority: Major > Fix For: 2.8.0, 2.7.4, 3.0.0-alpha1 > > Attachments: HDFS-9412-branch-2.7.00.patch, HDFS-9412..patch, > HDFS-9412.0001.patch, HDFS-9412.0002.patch > > > {{getBlocks}} in {{NameNodeRpcServer}} acquires a read lock and then may take a > long time to complete (probably several seconds, if the number of blocks is too > large). > During this period, other threads attempting to acquire the write lock will wait. > In an extreme case, RPC handlers are occupied by one reader thread calling > {{getBlocks}} and all other threads waiting for the write lock, so the RPC server appears > hung. Unfortunately, this tends to happen in heavily loaded clusters, since > read operations come and go fast (they do not need to wait), leaving write > operations waiting. > Looks like we can optimize this like the DN block report did in the past, by > splitting the operation into smaller sub-operations, and letting other threads do > their work between each sub-operation. The whole result is returned at once, > though (one thing different from the DN block report). > I am not sure whether this will work. Any better idea?
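The behavior in the release note — skip blocks smaller than dfs.balancer.getBlocks.min-block-size (default 10MB) when building the balancer's candidate list — can be sketched as a simple filter. The types below are simplified stand-ins, not the NameNode's actual block classes:

```java
// Minimal sketch, assuming block sizes are available as plain longs:
// drop any block below the configured threshold, since moving small blocks
// adds overhead without helping balance the cluster much.
import java.util.ArrayList;
import java.util.List;

public class MinBlockSizeFilter {
    // Default mirrors the release note: 10MB.
    static final long DEFAULT_MIN_BLOCK_SIZE = 10L * 1024 * 1024;

    static List<Long> candidateBlocks(List<Long> blockSizes, long minBlockSize) {
        List<Long> result = new ArrayList<>();
        for (long size : blockSizes) {
            if (size >= minBlockSize) { // keep only blocks worth moving
                result.add(size);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Long> sizes = List.of(1L * 1024 * 1024, 64L * 1024 * 1024, 128L * 1024 * 1024);
        // Only the 64MB and 128MB blocks survive the default 10MB cutoff.
        System.out.println(candidateBlocks(sizes, DEFAULT_MIN_BLOCK_SIZE).size()); // 2
    }
}
```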
[jira] [Commented] (HDFS-13399) Make Client field AlignmentContext non-static.
[ https://issues.apache.org/jira/browse/HDFS-13399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434588#comment-16434588 ] Erik Krogen commented on HDFS-13399: (1) I just saw the link error and assumed it was related since this patch was doing some changes around the relevant methods. If it is not, then no worries. (2) If there are no other usages I think your proposal is fine. {{Client}} is marked as {{Public}} but {{Evolving}}, and this is a package-private method. We are allowed to break compatibility at minor releases. Ping [~shv] again now that the patch is pretty well stabilized. > Make Client field AlignmentContext non-static. > -- > > Key: HDFS-13399 > URL: https://issues.apache.org/jira/browse/HDFS-13399 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: HDFS-12943 >Reporter: Plamen Jeliazkov >Assignee: Plamen Jeliazkov >Priority: Major > Attachments: HDFS-13399-HDFS-12943.000.patch, > HDFS-13399-HDFS-12943.001.patch > > > In HDFS-12977, DFSClient's constructor was altered to make use of a new > static method in Client that allowed one to set an AlignmentContext. This > work is to remove that static field and make each DFSClient pass its > AlignmentContext down to the proxy Call level.
[jira] [Resolved] (HDFS-12707) Ozone: start-all script is missing ozone start
[ https://issues.apache.org/jira/browse/HDFS-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham resolved HDFS-12707. --- Resolution: Won't Fix > Ozone: start-all script is missing ozone start > -- > > Key: HDFS-12707 > URL: https://issues.apache.org/jira/browse/HDFS-12707 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > start-all script is missing ozone start
[jira] [Commented] (HDFS-12707) Ozone: start-all script is missing ozone start
[ https://issues.apache.org/jira/browse/HDFS-12707?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434552#comment-16434552 ] Bharat Viswanadham commented on HDFS-12707: --- Closing this, as start-ozone.sh is used to start the Ozone daemons and also invokes start-dfs.sh to start the daemons required for Ozone. > Ozone: start-all script is missing ozone start > -- > > Key: HDFS-12707 > URL: https://issues.apache.org/jira/browse/HDFS-12707 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > start-all script is missing ozone start
[jira] [Commented] (HDFS-13432) Ozone: When datanodes register, send NodeReport and ContainerReport
[ https://issues.apache.org/jira/browse/HDFS-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434538#comment-16434538 ] genericqa commented on HDFS-13432:
--
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 6 new or modified test files. {color} |
|| || || || {color:brown} HDFS-7240 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 26s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} HDFS-7240 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 18s{color} | {color:green} HDFS-7240 passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 15s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 16s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 59s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 18s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s{color} | {color:red} container-service in HDFS-7240 failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s{color} | {color:red} server-scm in HDFS-7240 failed. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 13s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 11s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 12s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 8s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 16s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 15s{color} | {color:red} server-scm in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 16s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 17s{color} | {color:red} server-scm in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 17s{color} | {color:red} container-service in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 17s{color}
[jira] [Commented] (HDFS-13398) Hdfs recursive listing operation is very slow
[ https://issues.apache.org/jira/browse/HDFS-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434512#comment-16434512 ] Arpit Khare commented on HDFS-13398: [~ajaysachdev] : Is this the same functionality which we are trying to achieve? https://issues.apache.org/jira/browse/HDFS-11786 > Hdfs recursive listing operation is very slow > - > > Key: HDFS-13398 > URL: https://issues.apache.org/jira/browse/HDFS-13398 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.1 > Environment: HCFS file system where HDP 2.6.1 is connected to ECS > (Object Store). >Reporter: Ajay Sachdev >Priority: Major > Fix For: 2.7.1 > > Attachments: parallelfsPatch > > > The hdfs dfs -ls -R command is sequential in nature and is very slow for an > HCFS system. We have seen around 6 mins for a 40K directory/file structure. > The proposal is to use a multithreading approach to speed up recursive list, du > and count operations. > We have tried a ForkJoinPool implementation to improve performance for > the recursive listing operation. > [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli] > commit id : > 82387c8cd76c2e2761bd7f651122f83d45ae8876 > Another implementation is to use the Java Executor Service to run the listing > operation in multiple threads in parallel. This has > significantly reduced the time to 40 secs from 6 mins. > >
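The ForkJoinPool idea in the report can be illustrated with a small sketch. To stay self-contained it traverses the local filesystem via java.io.File rather than the HDFS client API; it shows only the fork/join traversal pattern, not the actual patch linked above:

```java
// Hedged sketch: count all entries under a directory by forking one task per
// subdirectory, so independent subtrees are listed in parallel.
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class ParallelListing {
    static final class CountTask extends RecursiveTask<Long> {
        private final File dir;
        CountTask(File dir) { this.dir = dir; }

        @Override
        protected Long compute() {
            File[] entries = dir.listFiles();
            if (entries == null) {
                return 0L; // not a directory, or not readable
            }
            long count = 0;
            List<CountTask> subTasks = new ArrayList<>();
            for (File entry : entries) {
                count++;
                if (entry.isDirectory()) {
                    CountTask task = new CountTask(entry);
                    task.fork(); // descend into subdirectories in parallel
                    subTasks.add(task);
                }
            }
            for (CountTask task : subTasks) {
                count += task.join();
            }
            return count;
        }
    }

    public static void main(String[] args) {
        File root = new File(args.length > 0 ? args[0] : ".");
        long total = new ForkJoinPool().invoke(new CountTask(root));
        System.out.println("entries under " + root + ": " + total);
    }
}
```

The same fork/join shape applies to du and count; against a remote store the win comes from overlapping the per-directory RPC round trips.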
[jira] [Commented] (HDFS-13398) Hdfs recursive listing operation is very slow
[ https://issues.apache.org/jira/browse/HDFS-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434508#comment-16434508 ] Arpit Khare commented on HDFS-13398: Cc: [~arpitagarwal] > Hdfs recursive listing operation is very slow > - > > Key: HDFS-13398 > URL: https://issues.apache.org/jira/browse/HDFS-13398 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.7.1 > Environment: HCFS file system where HDP 2.6.1 is connected to ECS > (Object Store). >Reporter: Ajay Sachdev >Priority: Major > Fix For: 2.7.1 > > Attachments: parallelfsPatch > > > The hdfs dfs -ls -R command is sequential in nature and is very slow for an > HCFS system. We have seen around 6 mins for a 40K directory/file structure. > The proposal is to use a multithreading approach to speed up recursive list, du > and count operations. > We have tried a ForkJoinPool implementation to improve performance for > the recursive listing operation. > [https://github.com/jasoncwik/hadoop-release/tree/parallel-fs-cli] > commit id : > 82387c8cd76c2e2761bd7f651122f83d45ae8876 > Another implementation is to use the Java Executor Service to run the listing > operation in multiple threads in parallel. This has > significantly reduced the time to 40 secs from 6 mins. > >
[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434507#comment-16434507 ] genericqa commented on HDFS-13430:
--
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 17s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 0s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 41s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}146m 49s{color} | {color:black} {color} |
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.datanode.TestDirectoryScanner |
| | hadoop.hdfs.qjournal.server.TestJournalNodeSync |
| | hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
| | hadoop.hdfs.server.balancer.TestBalancerRPCDelay |
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13430 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918594/HDFS-13430.01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 54934689ccbc 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / f7d5bac |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23888/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23888/testReport/ |
| Max. process+thread count | 3239 (vs. ulimit of 1) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: h
[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result
[ https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434446#comment-16434446 ] Hudson commented on HDFS-13386: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13974 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13974/]) HDFS-13386. RBF: Wrong date information in list file(-ls) result. (inigoiri: rev 18de6f2042b70f9f0d7a2620c60de022768a7b13) * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcServer.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterMountTable.java > RBF: Wrong date information in list file(-ls) result > > > Key: HDFS-13386 > URL: https://issues.apache.org/jira/browse/HDFS-13386 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Minor > Fix For: 3.2.0, 3.1.1, 2.9.2, 3.0.4 > > Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, > HDFS-13386-004.patch, HDFS-13386-005.patch, HDFS-13386-006.patch, > HDFS-13386-007.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, > image-2018-04-03-11-59-51-623.png > > > # hdfs dfs -ls > !image-2018-04-03-11-59-51-623.png! > this is happening because getMountPointDates is not implemented > {code:java} > private Map getMountPointDates(String path) { > Map ret = new TreeMap<>(); > // TODO add when we have a Mount Table > return ret; > } > {code}
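One possible shape for filling in that TODO — map each immediate child mount point under a path to its mount table entry's modification time, so -ls can show real dates. This is purely illustrative: {{MountEntry}} is a hypothetical stand-in for the federation mount table record, and the committed patch may differ.

```java
// Hedged sketch of a getMountPointDates-style helper over a stand-in mount table.
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MountPointDates {
    static final class MountEntry {
        final String sourcePath;
        final long modificationTime;
        MountEntry(String sourcePath, long modificationTime) {
            this.sourcePath = sourcePath;
            this.modificationTime = modificationTime;
        }
    }

    static Map<String, Long> getMountPointDates(String path, List<MountEntry> mountTable) {
        Map<String, Long> ret = new TreeMap<>();
        String prefix = path.endsWith("/") ? path : path + "/";
        for (MountEntry entry : mountTable) {
            if (entry.sourcePath.startsWith(prefix)) {
                // Keep only the immediate child component under `path`;
                // deeper entries roll up to that child with the latest time.
                String rest = entry.sourcePath.substring(prefix.length());
                String child = rest.contains("/") ? rest.substring(0, rest.indexOf('/')) : rest;
                ret.merge(child, entry.modificationTime, Math::max);
            }
        }
        return ret;
    }
}
```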
[jira] [Commented] (HDFS-13311) RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1643#comment-1643 ] genericqa commented on HDFS-13311:
--
| (x) *{color:red}-1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 27m 10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 29m 35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 25s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 22s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 38s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}127m 31s{color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b |
| JIRA Issue | HDFS-13311 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12917209/HDFS-13311.000.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle |
| uname | Linux 38973c747b42 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 10:45:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 0c93d43 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_162 |
| findbugs | v3.1.0-RC1 |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23886/testReport/ |
| Max. process+thread count | 1347 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: hadoop-common-project/hadoop-common |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23886/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org |
This message was automatically generated.
> RBF: TestRouterAdminCLI#te
[jira] [Updated] (HDFS-13432) Ozone: When datanodes register, send NodeReport and ContainerReport
[ https://issues.apache.org/jira/browse/HDFS-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13432: -- Status: Patch Available (was: Open) > Ozone: When datanodes register, send NodeReport and ContainerReport > --- > > Key: HDFS-13432 > URL: https://issues.apache.org/jira/browse/HDFS-13432 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13432-HDFS-7240.00.patch > > > From chillmode Design Notes: > As part of this Jira, the register call will be updated to send NodeReport and > ContainerReport. > Currently, datanodes send one heartbeat every 30 seconds. That means that even if > the datanode is ready, it will take around 1 min or longer before the SCM > sees the datanode container reports. We can address this partially by making > sure that the Register call contains both NodeReport and ContainerReport. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13432) Ozone: When datanodes register, send NodeReport and ContainerReport
[ https://issues.apache.org/jira/browse/HDFS-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13432: -- Attachment: HDFS-13432-HDFS-7240.00.patch > Ozone: When datanodes register, send NodeReport and ContainerReport > --- > > Key: HDFS-13432 > URL: https://issues.apache.org/jira/browse/HDFS-13432 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13432-HDFS-7240.00.patch > > > From chillmode Design Notes: > As part of this Jira, the register call will be updated to send NodeReport and > ContainerReport. > Currently, datanodes send one heartbeat every 30 seconds. That means that even if > the datanode is ready, it will take around 1 min or longer before the SCM > sees the datanode container reports. We can address this partially by making > sure that the Register call contains both NodeReport and ContainerReport. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13386) RBF: Wrong date information in list file(-ls) result
[ https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13386: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.4 2.9.2 3.1.1 3.2.0 Status: Resolved (was: Patch Available) Thanks [~dibyendu_hadoop] for the fix, committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9. > RBF: Wrong date information in list file(-ls) result > > > Key: HDFS-13386 > URL: https://issues.apache.org/jira/browse/HDFS-13386 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Minor > Fix For: 3.2.0, 3.1.1, 2.9.2, 3.0.4 > > Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, > HDFS-13386-004.patch, HDFS-13386-005.patch, HDFS-13386-006.patch, > HDFS-13386-007.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, > image-2018-04-03-11-59-51-623.png > > > # hdfs dfs -ls > !image-2018-04-03-11-59-51-623.png! > This is happening because getMountPointDates is not implemented > {code:java} > private Map<String, Long> getMountPointDates(String path) { > Map<String, Long> ret = new TreeMap<>(); > // TODO add when we have a Mount Table > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
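The root cause above is that getMountPointDates returns an empty map. A minimal sketch of the idea behind the fix is shown below; the record type, field names, and lookup shape here are assumptions for illustration, not the actual Router MountTable API, which reads these entries from the state store:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MountPointDatesSketch {

  // Hypothetical stand-in for a mount table record; the real code reads
  // MountTable records from the Router's state store.
  static class MountRecord {
    final String src;
    final long dateModified;
    MountRecord(String src, long dateModified) {
      this.src = src;
      this.dateModified = dateModified;
    }
  }

  // Map each mount point directly under 'path' to its modification date,
  // instead of returning an empty map (which made -ls show wrong dates).
  static Map<String, Long> getMountPointDates(
      String path, List<MountRecord> mountTable) {
    Map<String, Long> ret = new TreeMap<>();
    String prefix = path.equals("/") ? "/" : path + "/";
    for (MountRecord record : mountTable) {
      if (record.src.startsWith(prefix)
          && record.src.length() > prefix.length()) {
        String child = record.src.substring(prefix.length());
        if (!child.contains("/")) { // direct children only
          ret.put(child, record.dateModified);
        }
      }
    }
    return ret;
  }

  public static void main(String[] args) {
    List<MountRecord> table = new ArrayList<>();
    table.add(new MountRecord("/ns10t/ns1mountpoint", 42L));
    table.add(new MountRecord("/other/entry", 7L));
    Map<String, Long> dates = getMountPointDates("/ns10t", table);
    System.out.println(dates); // {ns1mountpoint=42}
  }
}
```

The listing code can then merge these dates into the FileStatus entries it returns for mount points, rather than leaving the date fields at their defaults.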
[jira] [Updated] (HDFS-13432) Ozone: When datanodes register, send NodeReport and ContainerReport
[ https://issues.apache.org/jira/browse/HDFS-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13432: -- Description: From chillmode Design Notes: As part of this Jira, the register call will be updated to send NodeReport and ContainerReport. Currently, datanodes send one heartbeat every 30 seconds. That means that even if the datanode is ready, it will take around 1 min or longer before the SCM sees the datanode container reports. We can address this partially by making sure that the Register call contains both NodeReport and ContainerReport. was: From chillmode Design Notes: As part of this Jira, the register call will be updated to send NodeReport and ContainerReport. Currently, datanodes send one heartbeat every 30 seconds. That means that even if the datanode is ready, it will take around 1 min or longer before the SCM sees the datanode container reports. We can address this partially be making sure that the Register call contains both NodeReport and ContainerReport. > Ozone: When datanodes register, send NodeReport and ContainerReport > --- > > Key: HDFS-13432 > URL: https://issues.apache.org/jira/browse/HDFS-13432 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > From chillmode Design Notes: > As part of this Jira, the register call will be updated to send NodeReport and > ContainerReport. > Currently, datanodes send one heartbeat every 30 seconds. That means that even if > the datanode is ready, it will take around 1 min or longer before the SCM > sees the datanode container reports. We can address this partially by making > sure that the Register call contains both NodeReport and ContainerReport. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13432) Ozone: When datanodes register, send NodeReport and ContainerReport
[ https://issues.apache.org/jira/browse/HDFS-13432?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Bharat Viswanadham updated HDFS-13432: -- Issue Type: Sub-task (was: Bug) Parent: HDFS-7240 > Ozone: When datanodes register, send NodeReport and ContainerReport > --- > > Key: HDFS-13432 > URL: https://issues.apache.org/jira/browse/HDFS-13432 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > > From chillmode Design Notes: > As part of this Jira, the register call will be updated to send NodeReport and > ContainerReport. > Currently, datanodes send one heartbeat every 30 seconds. That means that even if > the datanode is ready, it will take around 1 min or longer before the SCM > sees the datanode container reports. We can address this partially by making > sure that the Register call contains both NodeReport and ContainerReport. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13432) Ozone: When datanodes register, send NodeReport and ContainerReport
Bharat Viswanadham created HDFS-13432: - Summary: Ozone: When datanodes register, send NodeReport and ContainerReport Key: HDFS-13432 URL: https://issues.apache.org/jira/browse/HDFS-13432 Project: Hadoop HDFS Issue Type: Bug Reporter: Bharat Viswanadham Assignee: Bharat Viswanadham From chillmode Design Notes: As part of this Jira, the register call will be updated to send NodeReport and ContainerReport. Currently, datanodes send one heartbeat every 30 seconds. That means that even if the datanode is ready, it will take around 1 min or longer before the SCM sees the datanode container reports. We can address this partially by making sure that the Register call contains both NodeReport and ContainerReport. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
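The proposal above can be sketched with hypothetical message shapes; none of these type names are the actual SCM protocol classes, which are defined in the Ozone protobuf definitions. The point is that bundling both reports into the register call makes the container state visible to SCM immediately, instead of after the next 30-second heartbeat cycle:

```java
public class RegisterWithReportsSketch {

  // Hypothetical report payloads; the real ones carry node capacity/usage
  // and per-container state respectively.
  static class NodeReport { }
  static class ContainerReport { }

  // Hypothetical register request that bundles both reports, so SCM does
  // not have to wait for a heartbeat before seeing the datanode's containers.
  static class RegisterRequest {
    final String datanodeId;
    final NodeReport nodeReport;
    final ContainerReport containerReport;
    RegisterRequest(String datanodeId, NodeReport nodeReport,
        ContainerReport containerReport) {
      this.datanodeId = datanodeId;
      this.nodeReport = nodeReport;
      this.containerReport = containerReport;
    }
  }

  public static void main(String[] args) {
    RegisterRequest req = new RegisterRequest(
        "dn-1", new NodeReport(), new ContainerReport());
    System.out.println("register carries both reports: "
        + (req.nodeReport != null && req.containerReport != null));
  }
}
```

With this shape, the heartbeat path can stay unchanged; register simply becomes a superset of the first heartbeat's payload.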
[jira] [Commented] (HDFS-13311) RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434420#comment-16434420 ] Virajith Jalaparti commented on HDFS-13311: --- +1 LGTM. > RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows > --- > > Key: HDFS-13311 > URL: https://issues.apache.org/jira/browse/HDFS-13311 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF, windows > Attachments: HDFS-13311.000.patch > > > The Windows runs show that TestRouterAdminCLI#testCreateInvalidEntry fail > with NPE: > {code} > [ERROR] > testCreateInvalidEntry(org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI) > Time elapsed: 0.008 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:529) > at > org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:568) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:174) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:156) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI.testCreateInvalidEntry(TestRouterAdminCLI.java:444) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result
[ https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434413#comment-16434413 ] genericqa commented on HDFS-13386: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 23m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 15s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 29s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 19s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 55s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 15m 22s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 3s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13386 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918593/HDFS-13386-007.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 6cb365a52c55 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / f7d5bac | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23887/testReport/ | | Max. process+thread count | 1335 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23887/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RBF: Wrong date information in list file(-ls) result > > > Key: HDFS-13386 >
[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434392#comment-16434392 ] Xiao Chen commented on HDFS-13430: -- Thanks for offering [~shahrs87], feel free to commit once pre-commit comes back. Pretty confident you'll do fine. :) Let me know if any questions, and congrats on becoming a committer! > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434361#comment-16434361 ] Rushabh S Shah commented on HDFS-13430: --- [~xiaochen]: do you mind me committing this change? This will be practice for me committing to the Apache repo, and if I screw up it won't harm that much. > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java
[ https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434344#comment-16434344 ] Hudson commented on HDFS-13428: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13972 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13972/]) HDFS-13428. RBF: Remove LinkedList From StateStoreFileImpl.java. (inigoiri: rev f7d5bace435a8de151b94ccc3599a6c4de8f7daf) * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/driver/impl/StateStoreFileImpl.java > RBF: Remove LinkedList From StateStoreFileImpl.java > --- > > Key: HDFS-13428 > URL: https://issues.apache.org/jira/browse/HDFS-13428 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation >Affects Versions: 3.0.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HDFS-13428.1.patch > > > Replace {{LinkedList}} with {{ArrayList}} implementation in the > StateStoreFileImpl class. This is especially advantageous because we can > pre-allocate the internal array before a copy occurs. {{ArrayList}} is > faster for iterations and requires less memory than {{LinkedList}}. > {code:java} > protected List<String> getChildren(String path) { > List<String> ret = new LinkedList<>(); > File dir = new File(path); > File[] files = dir.listFiles(); > if (files != null) { > for (File file : files) { > String filename = file.getName(); > ret.add(filename); > } > } > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
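The replacement described in HDFS-13428 can be sketched as follows; this is a minimal illustration under the assumptions of the issue description, not the committed patch, and the class name is invented. Pre-sizing the ArrayList to `files.length` means no resize copy occurs while the listing is collected:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class StateStoreFileSketch {

  // ArrayList variant of getChildren: pre-allocated to the number of
  // directory entries, so no per-element nodes and no resize copies.
  static List<String> getChildren(String path) {
    File dir = new File(path);
    File[] files = dir.listFiles();
    if (files == null) {
      // listFiles() returns null for non-directories and I/O errors.
      return new ArrayList<>(0);
    }
    List<String> ret = new ArrayList<>(files.length);
    for (File file : files) {
      ret.add(file.getName());
    }
    return ret;
  }

  public static void main(String[] args) {
    // Smoke test: list the children of the system temp directory.
    List<String> children = getChildren(System.getProperty("java.io.tmpdir"));
    System.out.println("children: " + children.size());
  }
}
```

Beyond iteration speed, ArrayList stores only the element references in one backing array, while LinkedList allocates a node object per element, which is the memory argument the description makes.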
[jira] [Updated] (HDFS-13431) Ozone: Ozone Shell should use RestClient and RpcClient
[ https://issues.apache.org/jira/browse/HDFS-13431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Lokesh Jain updated HDFS-13431: --- Attachment: HDFS-13431-HDFS-7240.001.patch > Ozone: Ozone Shell should use RestClient and RpcClient > -- > > Key: HDFS-13431 > URL: https://issues.apache.org/jira/browse/HDFS-13431 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Lokesh Jain >Assignee: Lokesh Jain >Priority: Major > Attachments: HDFS-13431-HDFS-7240.001.patch > > > Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and > RpcClient instead of OzoneRestClient. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13431) Ozone: Ozone Shell should use RestClient and RpcClient
Lokesh Jain created HDFS-13431: -- Summary: Ozone: Ozone Shell should use RestClient and RpcClient Key: HDFS-13431 URL: https://issues.apache.org/jira/browse/HDFS-13431 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Lokesh Jain Assignee: Lokesh Jain Currently Ozone Shell uses OzoneRestClient. We should use both RestClient and RpcClient instead of OzoneRestClient. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434327#comment-16434327 ] Rushabh S Shah commented on HDFS-13430: --- The fix looks good. +1 binding. > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434321#comment-16434321 ] Xiao Chen edited comment on HDFS-13430 at 4/11/18 5:56 PM: --- [http://dist-test.cloudera.org/job?job_id=hadoop.jenkins.1523420254.16257] {noformat} Error Message expected:<2> but was:<3> Stacktrace java.lang.AssertionError: expected:<2> but was:<3> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testDelegationToken(TestEncryptionZonesWithKMS.java:100) {noformat} was (Author: xiaochen): [http://dist-test.cloudera.org/job?job_id=hadoop.jenkins.1523420254.16257] {noformat} Error Message expected:<2> but was:<3> Stacktrace java.lang.AssertionError: expected:<2> but was:<3> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testDelegationToken(TestEncryptionZonesWithKMS.java:100) {noformat} > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. 
-- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13425) Ozone: Clean-up of ozone related change from hadoop-common-project
[ https://issues.apache.org/jira/browse/HDFS-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434320#comment-16434320 ] genericqa commented on HDFS-13425: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 38s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 0s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 31s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 15s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 16m 48s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s{color} | {color:red} hadoop-hdds/common in HDFS-7240 has 1 extant Findbugs warnings. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 19s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s{color} | {color:red} server-scm in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 28m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 28m 4s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 12s{color} | {color:orange} root: The patch generated 1 new + 240 unchanged - 1 fixed = 241 total (was 241) {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} server-scm in the patch failed. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 32s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} server-scm in the patch failed. 
{color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 27s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 16s{color} | {color:green} common in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 23s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 41s{color} | {color:red} The patch generated 44 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}144m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f | | JIRA Issue | HDFS-13425 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918582/HDFS-13425-HDFS-7240.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 7107fc28bd59 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27
[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434321#comment-16434321 ] Xiao Chen commented on HDFS-13430: -- [http://dist-test.cloudera.org/job?job_id=hadoop.jenkins.1523420254.16257] {noformat} Error Message expected:<2> but was:<3> Stacktrace java.lang.AssertionError: expected:<2> but was:<3> at org.junit.Assert.fail(Assert.java:88) at org.junit.Assert.failNotEquals(Assert.java:743) at org.junit.Assert.assertEquals(Assert.java:118) at org.junit.Assert.assertEquals(Assert.java:555) at org.junit.Assert.assertEquals(Assert.java:542) at org.apache.hadoop.hdfs.TestEncryptionZonesWithKMS.testDelegationToken(TestEncryptionZonesWithKMS.java:100) {noformat} > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java
[ https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13428: --- Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) Thanks [~belugabehr] for the fix, committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9. > RBF: Remove LinkedList From StateStoreFileImpl.java > --- > > Key: HDFS-13428 > URL: https://issues.apache.org/jira/browse/HDFS-13428 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation >Affects Versions: 3.0.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HDFS-13428.1.patch > > > Replace {{LinkedList}} with {{ArrayList}} implementation in the > StateStoreFileImpl class. This is especially advantageous because we can > pre-allocate the internal array before a copy occurs. {{ArrayList}} is > faster for iterations and requires less memory than {{LinkedList}}. > {code:java} > protected List<String> getChildren(String path) { > List<String> ret = new LinkedList<>(); > File dir = new File(path); > File[] files = dir.listFiles(); > if (files != null) { > for (File file : files) { > String filename = file.getName(); > ret.add(filename); > } > } > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434291#comment-16434291 ] Rushabh S Shah edited comment on HDFS-13430 at 4/11/18 5:40 PM: -+1 binding.- [~xiaochen]: do you mind sharing the stack trace or error message ? was (Author: shahrs87): +1 binding. > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java
[ https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13428: --- Issue Type: Sub-task (was: Improvement) Parent: HDFS-12615 > RBF: Remove LinkedList From StateStoreFileImpl.java > --- > > Key: HDFS-13428 > URL: https://issues.apache.org/jira/browse/HDFS-13428 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: federation >Affects Versions: 3.0.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HDFS-13428.1.patch > > > Replace {{LinkedList}} with {{ArrayList}} implementation in the > StateStoreFileImpl class. This is especially advantageous because we can > pre-allocate the internal array before a copy occurs. {{ArrayList}} is > faster for iterations and requires less memory than {{LinkedList}}. > {code:java} > protected List getChildren(String path) { > List ret = new LinkedList<>(); > File dir = new File(path); > File[] files = dir.listFiles(); > if (files != null) { > for (File file : files) { > String filename = file.getName(); > ret.add(filename); > } > } > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434291#comment-16434291 ] Rushabh S Shah commented on HDFS-13430: --- +1 binding. > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-13430: - Status: Patch Available (was: Open) > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434283#comment-16434283 ] Xiao Chen commented on HDFS-13430: -- [~shahrs87] FYI, sorry missed this one > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
[ https://issues.apache.org/jira/browse/HDFS-13430?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiao Chen updated HDFS-13430: - Attachment: HDFS-13430.01.patch > Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 > -- > > Key: HDFS-13430 > URL: https://issues.apache.org/jira/browse/HDFS-13430 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Xiao Chen >Assignee: Xiao Chen >Priority: Major > Attachments: HDFS-13430.01.patch > > > Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the > hadoop-common precommit runs. > This is caught by our internal pre-commit using dist-test, and appears to be > the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13430) Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445
Xiao Chen created HDFS-13430: Summary: Fix TestEncryptionZonesWithKMS failure due to HADOOP-14445 Key: HDFS-13430 URL: https://issues.apache.org/jira/browse/HDFS-13430 Project: Hadoop HDFS Issue Type: Bug Reporter: Xiao Chen Assignee: Xiao Chen Attachments: HDFS-13430.01.patch Unfortunately HADOOP-14445 had an HDFS test failure that's not caught in the hadoop-common precommit runs. This is caught by our internal pre-commit using dist-test, and appears to be the only failure. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13386) RBF: Wrong date information in list file(-ls) result
[ https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434266#comment-16434266 ] Íñigo Goiri commented on HDFS-13386: I added [^HDFS-13386-007.patch] fixing the checkstyle issues. I'll commit it once Yetus comes back with a +1. Sorry [~dibyendu_hadoop] for going through these small style fixes... I'll try to have this done today. > RBF: Wrong date information in list file(-ls) result > > > Key: HDFS-13386 > URL: https://issues.apache.org/jira/browse/HDFS-13386 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Minor > Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, > HDFS-13386-004.patch, HDFS-13386-005.patch, HDFS-13386-006.patch, > HDFS-13386-007.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, > image-2018-04-03-11-59-51-623.png > > > # hdfs dfs -ls > !image-2018-04-03-11-59-51-623.png! > this is happening because getMountPointDates is not implemented > {code:java} > private Map<String, Long> getMountPointDates(String path) { > Map<String, Long> ret = new TreeMap<>(); > // TODO add when we have a Mount Table > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
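The stubbed method above could be completed roughly as follows. This is an illustrative sketch only: `MountTableStore` is an assumed interface standing in for the Router's real mount table, and the actual HDFS-13386 patch may resolve the dates differently.

```java
import java.util.Map;
import java.util.TreeMap;

public class MountPointDates {
  // Assumed interface, not the real RBF API.
  interface MountTableStore {
    /** Modification time (epoch millis) per mount point under a path. */
    Map<String, Long> getModificationTimes(String path);
  }

  // Fills the TreeMap from the mount table instead of returning it empty,
  // so -ls can show real dates for mount points.
  static Map<String, Long> getMountPointDates(String path, MountTableStore store) {
    Map<String, Long> ret = new TreeMap<>();
    if (store != null) {
      Map<String, Long> times = store.getModificationTimes(path);
      if (times != null) {
        ret.putAll(times);
      }
    }
    return ret;
  }
}
```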
[jira] [Updated] (HDFS-13386) RBF: Wrong date information in list file(-ls) result
[ https://issues.apache.org/jira/browse/HDFS-13386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13386: --- Attachment: HDFS-13386-007.patch > RBF: Wrong date information in list file(-ls) result > > > Key: HDFS-13386 > URL: https://issues.apache.org/jira/browse/HDFS-13386 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Dibyendu Karmakar >Assignee: Dibyendu Karmakar >Priority: Minor > Attachments: HDFS-13386-002.patch, HDFS-13386-003.patch, > HDFS-13386-004.patch, HDFS-13386-005.patch, HDFS-13386-006.patch, > HDFS-13386-007.patch, HDFS-13386.000.patch, HDFS-13386.001.patch, > image-2018-04-03-11-59-51-623.png > > > # hdfs dfs -ls > !image-2018-04-03-11-59-51-623.png! > this is happening because getMountPointDates is not implemented > {code:java} > private Map getMountPointDates(String path) { > Map ret = new TreeMap<>(); > // TODO add when we have a Mount Table > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12615) Router-based HDFS federation phase 2
[ https://issues.apache.org/jira/browse/HDFS-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434256#comment-16434256 ] Íñigo Goiri commented on HDFS-12615: [~linyiqun] thanks for following up. As you mentioned we closed most of the opened tasks and now we have three big parts: # Tasks that would be important to have like the one to keep locality (HDFS-13248) and the DBMS store (HDFS-13245). # Small fixes, improvements, and some new unit tests. # Big tasks like security, rebalancer, and the DNs interacting with the Routers. I think tracking 1 and 2 in this umbrella is fine but I'm thinking on making the others their own umbrella: * HDFS-12510: security will have 3 or more patches here including the local security, delegation tokens, documentation, etc. ([~zhengxg3] is working on this). * HDFS-13123: the rebalancer will require a few JIRAs like the store for the rebalancer logs, the rebalancer, unit tests, etc. ([~ywskycn] is taking care of this). * HDFS-13098: this will require a few subtasks and something similar to HDFS-13312. (I can take this). Any thoughts on this? Any other important feature missing or that would be good to have? > Router-based HDFS federation phase 2 > > > Key: HDFS-12615 > URL: https://issues.apache.org/jira/browse/HDFS-12615 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Labels: RBF > > This umbrella JIRA tracks set of improvements over the Router-based HDFS > federation (HDFS-10467). -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
[ https://issues.apache.org/jira/browse/HDFS-13324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jitendra Nath Pandey updated HDFS-13324: Fix Version/s: HDFS-7240 > Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails > -- > > Key: HDFS-13324 > URL: https://issues.apache.org/jira/browse/HDFS-13324 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Shashikant Banerjee >Priority: Major > Labels: newbie > Fix For: HDFS-7240 > > Attachments: HDFS-13324-HDFS-7240.000.patch, > HDFS-13324-HDFS-7240.001.patch, HDFS-13324-HDFS-7240.002.patch, > HDFS-13324-HDFS-7240.003.patch > > > We have removed the dependency of DatanodeID in HDSL/Ozone and there is no > need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and > InfoSecurePort from DatanodeDetails. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13414) Ozone: Update existing Ozone documentation according to the recent changes
[ https://issues.apache.org/jira/browse/HDFS-13414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434253#comment-16434253 ] Mukul Kumar Singh commented on HDFS-13414: -- Thanks for the updated patch [~elek]. The patch looks really good to me. Some minor comments on the updated patch. 1) OzoneCommandShell.md:28, should the filename be ozone-site.xml ? 2) OzoneGettingStarted.md:26, hdss -> hdds 3) OzoneGettingStarted.md:33 & 35 -> should we remove "checkout trunk" from this ? 4) OzoneGettingStarted.md:178 -> After HDFS-13395, we should also have a section on "hdds.datanode.plugins" as well. 5) OzoneGettingStarted.md:348, extra line at the end. > Ozone: Update existing Ozone documentation according to the recent changes > -- > > Key: HDFS-13414 > URL: https://issues.apache.org/jira/browse/HDFS-13414 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Elek, Marton >Assignee: Elek, Marton >Priority: Minor > Attachments: HDFS-13414-HDFS-7240.001.patch, > HDFS-13414-HDFS-7240.002.patch > > > 1. Datanode port has been changed > 2. remove the references to the branch (prepare to merge) > 3. CLI commands are changed (eg. ozone scm) -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13311) RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows
[ https://issues.apache.org/jira/browse/HDFS-13311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13311: --- Status: Patch Available (was: Open) > RBF: TestRouterAdminCLI#testCreateInvalidEntry fails on Windows > --- > > Key: HDFS-13311 > URL: https://issues.apache.org/jira/browse/HDFS-13311 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > Labels: RBF, windows > Attachments: HDFS-13311.000.patch > > > The Windows runs show that TestRouterAdminCLI#testCreateInvalidEntry fail > with NPE: > {code} > [ERROR] > testCreateInvalidEntry(org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI) > Time elapsed: 0.008 s <<< ERROR! > java.lang.NullPointerException > at > org.apache.hadoop.util.GenericOptionsParser.preProcessForWindows(GenericOptionsParser.java:529) > at > org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:568) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:174) > at > org.apache.hadoop.util.GenericOptionsParser.(GenericOptionsParser.java:156) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at > org.apache.hadoop.hdfs.server.federation.router.TestRouterAdminCLI.testCreateInvalidEntry(TestRouterAdminCLI.java:444) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
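The stack trace above points at `preProcessForWindows` dereferencing an argument it received. One plausible hardening, assuming the NPE comes from a null element in the argument array handed to the parser (the committed HDFS-13311 fix may well take a different approach):

```java
import java.util.ArrayList;
import java.util.List;

public class ArgSanitizer {
  // Drops null entries (and a null array) before the args reach a parser
  // such as GenericOptionsParser. Purely a sketch of the failure mode.
  static String[] dropNulls(String[] args) {
    if (args == null) {
      return new String[0];
    }
    List<String> out = new ArrayList<>(args.length);
    for (String a : args) {
      if (a != null) {
        out.add(a);
      }
    }
    return out.toArray(new String[0]);
  }
}
```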
[jira] [Issue Comment Deleted] (HDFS-13245) RBF: State store DBMS implementation
[ https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13245: --- Comment: was deleted (was: [~yiran] Look forward your patch.) > RBF: State store DBMS implementation > > > Key: HDFS-13245 > URL: https://issues.apache.org/jira/browse/HDFS-13245 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: maobaolong >Assignee: Yiran Wu >Priority: Major > > Add a DBMS implementation for the State Store. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13245) RBF: State store DBMS implementation
[ https://issues.apache.org/jira/browse/HDFS-13245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434245#comment-16434245 ] Íñigo Goiri commented on HDFS-13245: Thanks [~yiran] for taking this. Try to align as much as possible with YARN-3663. > RBF: State store DBMS implementation > > > Key: HDFS-13245 > URL: https://issues.apache.org/jira/browse/HDFS-13245 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: hdfs >Reporter: maobaolong >Assignee: Yiran Wu >Priority: Major > > Add a DBMS implementation for the State Store. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster
[ https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434242#comment-16434242 ] Íñigo Goiri commented on HDFS-13045: Thanks [~ywskycn] for the report/review/commit. > RBF: Improve error message returned from subcluster > --- > > Key: HDFS-13045 > URL: https://issues.apache.org/jira/browse/HDFS-13045 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4 > > Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, > HDFS-13045.002.patch, HDFS-13045.003.patch, HDFS-13045.004.patch > > > Currently, Router directly returns exception response from subcluster to > client, which may not have the correct error message, especially when the > error message containing a path. > One example, we have a mount path "/a/b" mapped to subclusterA's "/c/d". If > user1 does a chown operation on "/a/b", and he doesn't have corresponding > privilege, currently the error msg looks like "Permission denied. user=user1 > is not the owner of inode=/c/d", which may confuse user. Would be better to > reverse the path back to original mount path. > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13388) RequestHedgingProxyProvider calls multiple configured NNs all the time
[ https://issues.apache.org/jira/browse/HDFS-13388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434239#comment-16434239 ] Íñigo Goiri commented on HDFS-13388: [~LiJinglun], this would be an addendum on top of what we have already committed, right? I'm in between three options: * Revert the current one and do a full patch * Committing an addendum * Opening a new JIRA for the fix > RequestHedgingProxyProvider calls multiple configured NNs all the time > -- > > Key: HDFS-13388 > URL: https://issues.apache.org/jira/browse/HDFS-13388 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs-client >Reporter: Jinglun >Assignee: Jinglun >Priority: Major > Fix For: 2.10.0, 3.2.0, 3.1.1 > > Attachments: HADOOP-13388.0001.patch, HADOOP-13388.0002.patch, > HADOOP-13388.0003.patch, HADOOP-13388.0004.patch, HADOOP-13388.0005.patch, > HADOOP-13388.0006.patch, HADOOP-13388.0007.patch > > > In HDFS-7858 RequestHedgingProxyProvider was designed to "first > simultaneously call multiple configured NNs to decide which is the active > Namenode and then for subsequent calls it will invoke the previously > successful NN ." But the current code call multiple configured NNs every time > even when we already got the successful NN. > That's because in RetryInvocationHandler.java, ProxyDescriptor's member > proxyInfo is assigned only when it is constructed or when failover occurs. > RequestHedgingProxyProvider.currentUsedProxy is null in both cases, so the > only proxy we can get is always a dynamic proxy handled by > RequestHedgingInvocationHandler.class. RequestHedgingInvocationHandler.class > handles invoked method by calling multiple configured NNs. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
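The intended behaviour described above — race all configured NameNodes once, then stick with the winner — can be sketched as below. The names (`currentUsedProxy`, `invoke`) echo the discussion but this standalone class is not the real RequestHedgingProxyProvider API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Supplier;

public class HedgingSketch {
  // Cached winner of the first hedged call; null until one target succeeds.
  private volatile Supplier<String> currentUsedProxy;

  public String invoke(List<Supplier<String>> candidates) throws Exception {
    Supplier<String> cached = currentUsedProxy;
    if (cached != null) {
      return cached.get(); // subsequent calls: single target, no extra RPCs
    }
    ExecutorService pool = Executors.newFixedThreadPool(candidates.size());
    try {
      List<Callable<String>> tasks = new ArrayList<>();
      for (Supplier<String> c : candidates) {
        tasks.add(() -> {
          String result = c.get();  // throws if this target is standby/down
          currentUsedProxy = c;     // remember the successful target
          return result;
        });
      }
      return pool.invokeAny(tasks); // first successful result wins
    } finally {
      pool.shutdownNow();
    }
  }
}
```

The bug being discussed is precisely the cached-winner branch never being taken, so every call pays the hedging cost.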
[jira] [Commented] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java
[ https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434236#comment-16434236 ] Íñigo Goiri commented on HDFS-13428: Thanks [~belugabehr] for the patch. Can you point to some link for the speed for iterating {{ArrayList}} vs {{LinkedList}}? I believe that would be the case specially given it will allocate a continuous block of memory but just for reference. Out of curiosity, your changes are mostly optimizations in using data structures in kind of random places across HDFS; are you doing some static analysis through the code? In any case, [^HDFS-13428.1.patch] LGTM. +1 I'll commit this during the day. > RBF: Remove LinkedList From StateStoreFileImpl.java > --- > > Key: HDFS-13428 > URL: https://issues.apache.org/jira/browse/HDFS-13428 > Project: Hadoop HDFS > Issue Type: Improvement > Components: federation >Affects Versions: 3.0.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HDFS-13428.1.patch > > > Replace {{LinkedList}} with {{ArrayList}} implementation in the > StateStoreFileImpl class. This is especially advantageous because we can > pre-allocate the internal array before a copy occurs. {{ArrayList}} is > faster for iterations and requires less memory than {{LinkedList}}. > {code:java} > protected List getChildren(String path) { > List ret = new LinkedList<>(); > File dir = new File(path); > File[] files = dir.listFiles(); > if (files != null) { > for (File file : files) { > String filename = file.getName(); > ret.add(filename); > } > } > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
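The change under review amounts to the following. The method name mirrors `StateStoreFileImpl.getChildren`, but this standalone class is illustrative rather than the committed patch; the point is that `ArrayList` can be pre-allocated to the exact number of directory entries, avoiding `LinkedList`'s per-node object overhead.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class ChildrenLister {
  protected static List<String> getChildren(String path) {
    File dir = new File(path);
    File[] files = dir.listFiles();
    if (files == null) {
      // Not a directory, or an I/O error: return an empty list.
      return new ArrayList<>(0);
    }
    // Size the backing array up front; no resizing during the copy.
    List<String> ret = new ArrayList<>(files.length);
    for (File file : files) {
      ret.add(file.getName());
    }
    return ret;
  }
}
```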
[jira] [Commented] (HDFS-13418) NetworkTopology should be configurable when enable DFSNetworkTopology
[ https://issues.apache.org/jira/browse/HDFS-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434237#comment-16434237 ] genericqa commented on HDFS-13418: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 28m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 55s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 12s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 59s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 35s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 77m 27s{color} | {color:red} hadoop-hdfs in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}143m 32s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting | | | hadoop.hdfs.TestEncryptionZonesWithKMS | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8620d2b | | JIRA Issue | HDFS-13418 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918574/HDFS-13418.004.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 0991277b2f33 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / eefe2a1 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/23884/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23884/testReport/ | | Max. process+thread count | 3694 (vs. ulimit of 1) | | modules | C: ha
[jira] [Commented] (HDFS-13418) NetworkTopology should be configurable when enable DFSNetworkTopology
[ https://issues.apache.org/jira/browse/HDFS-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434231#comment-16434231 ] Chen Liang commented on HDFS-13418: --- Thanks [~Tao Jie] and [~linyiqun] for the followup! v003 patch LGTM, +1. > NetworkTopology should be configurable when enable DFSNetworkTopology > -- > > Key: HDFS-13418 > URL: https://issues.apache.org/jira/browse/HDFS-13418 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.0.1 >Reporter: Tao Jie >Assignee: Tao Jie >Priority: Major > Attachments: HDFS-13418.001.patch, HDFS-13418.002.patch, > HDFS-13418.003.patch, HDFS-13418.004.patch > > > In HDFS-11530 we introduce DFSNetworkTopology and in HDFS-11998 we set > DFSNetworkTopology as the default implementation. > We still have {{net.topology.impl=org.apache.hadoop.net.NetworkTopology}} in > core-site.default. Actually this property does not take effect once > {{dfs.use.dfs.network.topology}} is true. > in {{DatanodeManager}}, networkTopology is initialized as > {code} > if (useDfsNetworkTopology) { > networktopology = DFSNetworkTopology.getInstance(conf); > } else { > networktopology = NetworkTopology.getInstance(conf); > } > {code} > I think we should still make the NetworkTopology configurable rather than > hard code the implementation since we may need another NetworkTopology impl. > I am not sure if there is other consideration. Any thought? [~vagarychen] > [~linyiqun] -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
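Making the topology class configurable instead of hard-coded can be sketched as below. The config key and class names are illustrative (a plain `Map` stands in for Hadoop's `Configuration`, and the key may not match what the HDFS-13418 patch actually uses); the idea is reflection-based instantiation with `DFSNetworkTopology` as the default.

```java
import java.util.Map;

public class TopologyFactory {
  interface Topology { }
  public static class DefaultTopology implements Topology { }

  // Looks up an implementation class name in the config, falling back to
  // the default; instantiates it reflectively via the no-arg constructor.
  static Topology getInstance(Map<String, String> conf) throws Exception {
    String cls = conf.getOrDefault("net.topology.impl",
        DefaultTopology.class.getName());
    return (Topology) Class.forName(cls).getDeclaredConstructor().newInstance();
  }
}
```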
[jira] [Updated] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java
[ https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13428: --- Environment: (was: Replace {{LinkedList}} with {{ArrayList}} implementation in the StateStoreFileImpl class. This is especially advantageous because we can pre-allocate the internal array before a copy occurs. {{ArrayList}} is faster for iterations and requires less memory than {{LinkedList}}. {code:java} protected List getChildren(String path) { List ret = new LinkedList<>(); File dir = new File(path); File[] files = dir.listFiles(); if (files != null) { for (File file : files) { String filename = file.getName(); ret.add(filename); } } return ret; }{code}) > RBF: Remove LinkedList From StateStoreFileImpl.java > --- > > Key: HDFS-13428 > URL: https://issues.apache.org/jira/browse/HDFS-13428 > Project: Hadoop HDFS > Issue Type: Improvement > Components: federation >Affects Versions: 3.0.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HDFS-13428.1.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13427) Fix the section titles of transparent encryption document
[ https://issues.apache.org/jira/browse/HDFS-13427?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434220#comment-16434220 ] Arpit Agarwal commented on HDFS-13427: -- +1 > Fix the section titles of transparent encryption document > - > > Key: HDFS-13427 > URL: https://issues.apache.org/jira/browse/HDFS-13427 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation >Reporter: Akira Ajisaka >Assignee: Akira Ajisaka >Priority: Minor > Attachments: HDFS-13427.01.patch > > > {noformat} > The `crypto` command before Hadoop 2.8.0 does not provision the `.Trash` > directory automatically. If an encryption zone is created before Hadoop > 2.8.0, and then the cluster is upgraded to Hadoop 2.8.0 or above, the trash > directory can be provisioned using `-provisionTrash` option (e.g., `hdfs > crypto -provisionTrash -path /zone`). > Attack vectors > -- > {noformat} > The long sentence starts with 'The crypto' wrongly become the title. We need > to add a blank line between the sentence and 'Attack vectors' to fix this. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java
[ https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13428: --- Description: Replace {{LinkedList}} with {{ArrayList}} implementation in the StateStoreFileImpl class. This is especially advantageous because we can pre-allocate the internal array before a copy occurs. {{ArrayList}} is faster for iterations and requires less memory than {{LinkedList}}. {code:java} protected List getChildren(String path) { List ret = new LinkedList<>(); File dir = new File(path); File[] files = dir.listFiles(); if (files != null) { for (File file : files) { String filename = file.getName(); ret.add(filename); } } return ret; } {code} > RBF: Remove LinkedList From StateStoreFileImpl.java > --- > > Key: HDFS-13428 > URL: https://issues.apache.org/jira/browse/HDFS-13428 > Project: Hadoop HDFS > Issue Type: Improvement > Components: federation >Affects Versions: 3.0.1 >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HDFS-13428.1.patch > > > Replace {{LinkedList}} with {{ArrayList}} implementation in the > StateStoreFileImpl class. This is especially advantageous because we can > pre-allocate the internal array before a copy occurs. {{ArrayList}} is > faster for iterations and requires less memory than {{LinkedList}}. > {code:java} > protected List getChildren(String path) { > List ret = new LinkedList<>(); > File dir = new File(path); > File[] files = dir.listFiles(); > if (files != null) { > for (File file : files) { > String filename = file.getName(); > ret.add(filename); > } > } > return ret; > } > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13428) RBF: Remove LinkedList From StateStoreFileImpl.java
[ https://issues.apache.org/jira/browse/HDFS-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13428: --- Summary: RBF: Remove LinkedList From StateStoreFileImpl.java (was: Remove LinkedList From StateStoreFileImpl.java) > RBF: Remove LinkedList From StateStoreFileImpl.java > --- > > Key: HDFS-13428 > URL: https://issues.apache.org/jira/browse/HDFS-13428 > Project: Hadoop HDFS > Issue Type: Improvement > Components: federation >Affects Versions: 3.0.1 > Environment: Replace {{LinkedList}} with {{ArrayList}} implementation > in the StateStoreFileImpl class. This is especially advantageous because we > can pre-allocate the internal array before a copy occurs. {{ArrayList}} is > faster for iterations and requires less memory than {{LinkedList}}. > > {code:java} > protected List<String> getChildren(String path) { > List<String> ret = new LinkedList<>(); > File dir = new File(path); > File[] files = dir.listFiles(); > if (files != null) { > for (File file : files) { > String filename = file.getName(); > ret.add(filename); > } > } > return ret; > }{code} >Reporter: BELUGA BEHR >Assignee: BELUGA BEHR >Priority: Trivial > Attachments: HDFS-13428.1.patch > >
[jira] [Commented] (HDFS-13416) Ozone: TestNodeManager tests fail
[ https://issues.apache.org/jira/browse/HDFS-13416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434201#comment-16434201 ] Bharat Viswanadham commented on HDFS-13416: --- Thank You [~nandakumar131] for review and committing the changes. > Ozone: TestNodeManager tests fail > - > > Key: HDFS-13416 > URL: https://issues.apache.org/jira/browse/HDFS-13416 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Attachments: HDFS-13416-HDFS-7240.00.patch, > HDFS-13416-HDFS-7240.01.patch > > > java.lang.IllegalArgumentException: Invalid UUID string: h0 > at java.util.UUID.fromString(UUID.java:194) > at > org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:68) > at > org.apache.hadoop.hdds.protocol.DatanodeDetails.<init>(DatanodeDetails.java:36) > at > org.apache.hadoop.hdds.protocol.DatanodeDetails$Builder.build(DatanodeDetails.java:416) > at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:95) > at org.apache.hadoop.hdds.scm.TestUtils.getDatanodeDetails(TestUtils.java:48) > at > org.apache.hadoop.hdds.scm.node.TestNodeManager.createNodeSet(TestNodeManager.java:719) > at > org.apache.hadoop.hdds.scm.node.TestNodeManager.testScmLogsHeartbeatFlooding(TestNodeManager.java:913) > at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) > at > sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) > at > sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) > at java.lang.reflect.Method.invoke(Method.java:498) > at > org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) > at > org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) > at > org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) > at > org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) > at > 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) > at > org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:168) > at org.junit.rules.RunRules.evaluate(RunRules.java:20) > at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) > at > org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) > at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) > at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) > at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) > at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) > at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) > at > org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) > at org.junit.runners.ParentRunner.run(ParentRunner.java:309) > at org.junit.runner.JUnitCore.run(JUnitCore.java:160) > at > com.intellij.junit4.JUnit4IdeaTestRunner.startRunnerWithArgs(JUnit4IdeaTestRunner.java:68) > at > com.intellij.rt.execution.junit.IdeaTestRunner$Repeater.startRunnerWithArgs(IdeaTestRunner.java:47) > at > com.intellij.rt.execution.junit.JUnitStarter.prepareStreamsAndStart(JUnitStarter.java:242) > at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70) > > This is happening after this change HDFS-13300 > cc [~nandakumar131]
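The root cause is visible in the first frames of the trace: {{DatanodeDetails}} parses the datanode id with {{java.util.UUID#fromString}}, which rejects an arbitrary host-style name such as "h0". A minimal sketch of that failure mode (the helper class and method names are mine, for illustration only):

```java
import java.util.UUID;

public class UuidCheckSketch {

  // Returns true when s parses as a UUID, mirroring the check that
  // throws IllegalArgumentException inside DatanodeDetails when a
  // test passes a synthetic name like "h0" as the datanode id.
  public static boolean isValidUuid(String s) {
    try {
      UUID.fromString(s);
      return true;
    } catch (IllegalArgumentException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    System.out.println(isValidUuid("h0"));                          // false
    System.out.println(isValidUuid(UUID.randomUUID().toString()));  // true
  }
}
```

Test helpers therefore need to supply real UUID strings (for example via {{UUID.randomUUID()}}) for the datanode id rather than synthetic host names.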
[jira] [Updated] (HDFS-13045) RBF: Improve error message returned from subcluster
[ https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Yan updated HDFS-13045: --- Resolution: Fixed Fix Version/s: 3.0.4 2.9.2 3.1.1 3.2.0 2.10.0 Status: Resolved (was: Patch Available) Committed to trunk, branch-3.1, branch-3.0, branch-2, and branch-2.9. Thanks for the contributions [~elgoiri]. > RBF: Improve error message returned from subcluster > --- > > Key: HDFS-13045 > URL: https://issues.apache.org/jira/browse/HDFS-13045 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Íñigo Goiri >Priority: Minor > Fix For: 2.10.0, 3.2.0, 3.1.1, 2.9.2, 3.0.4 > > Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, > HDFS-13045.002.patch, HDFS-13045.003.patch, HDFS-13045.004.patch > > > Currently, the Router directly returns the exception response from the subcluster to the > client, which may not carry the correct error message, especially when the > error message contains a path. > One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If > user1 does a chown operation on "/a/b" and doesn't have the corresponding > privilege, the error message currently looks like "Permission denied. user=user1 > is not the owner of inode=/c/d", which may confuse the user. It would be better to > map the path back to the original mount path. > >
[jira] [Commented] (HDFS-13045) RBF: Improve error message returned from subcluster
[ https://issues.apache.org/jira/browse/HDFS-13045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434081#comment-16434081 ] Hudson commented on HDFS-13045: --- FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #13971 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13971/]) HDFS-13045. RBF: Improve error message returned from subcluster. (weiy: rev 0c93d43f3d624a4fd17b3b050443d9e7e20d4f0a) * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/MountTable.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RemoteLocationContext.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterAdmin.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/store/records/impl/pb/MountTablePBImpl.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/FederationNamespaceInfo.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/RemoteLocation.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/MockResolver.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/store/records/TestMountTable.java * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/resolver/MountTableResolver.java > RBF: Improve error message returned from subcluster > --- > > Key: HDFS-13045 > URL: 
https://issues.apache.org/jira/browse/HDFS-13045 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Íñigo Goiri >Priority: Minor > Attachments: HDFS-13045.000.patch, HDFS-13045.001.patch, > HDFS-13045.002.patch, HDFS-13045.003.patch, HDFS-13045.004.patch > > > Currently, the Router directly returns the exception response from the subcluster to the > client, which may not carry the correct error message, especially when the > error message contains a path. > One example: we have a mount path "/a/b" mapped to subclusterA's "/c/d". If > user1 does a chown operation on "/a/b" and doesn't have the corresponding > privilege, the error message currently looks like "Permission denied. user=user1 > is not the owner of inode=/c/d", which may confuse the user. It would be better to > map the path back to the original mount path. > >
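The idea behind the fix can be sketched as rewriting the subcluster path inside the error message back to the client-visible mount path. The helper below is hypothetical and only illustrates the concept; it is not the code committed in HDFS-13045, which threads the mount-table mapping through RouterRpcClient and the related record classes:

```java
public class MountPathRewriteSketch {

  // Hypothetical helper: replace the subcluster path (e.g. /c/d) in a
  // namenode error message with the mount path the client actually
  // used (e.g. /a/b), so the user sees a location they recognize.
  public static String rewritePath(String msg, String remotePath, String mountPath) {
    return msg.replace(remotePath, mountPath);
  }

  public static void main(String[] args) {
    String msg = "Permission denied. user=user1 is not the owner of inode=/c/d";
    System.out.println(rewritePath(msg, "/c/d", "/a/b"));
    // -> Permission denied. user=user1 is not the owner of inode=/a/b
  }
}
```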
[jira] [Commented] (HDFS-13425) Ozone: Clean-up of ozone related change from hadoop-common-project
[ https://issues.apache.org/jira/browse/HDFS-13425?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16434079#comment-16434079 ] genericqa commented on HDFS-13425: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 42s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 37s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 47s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 52s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 27s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 58s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 41s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 41s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 44s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch generated 1 new + 240 unchanged - 1 fixed = 241 total (was 241) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 1s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 5s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 39s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 42s{color} | {color:green} hadoop-common in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 38s{color} | {color:red} The patch generated 5 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}117m 9s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:b78c94f | | JIRA Issue | HDFS-13425 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12918546/HDFS-13425-HDFS-7240.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux c3e5640ba4dc 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / ea85801 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_162 | | findbugs | v3.1.0-RC1 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23882/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23882/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/23882/artifact/out/patch-asflicense-problems.txt