[jira] [Commented] (HDFS-13230) RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns
[ https://issues.apache.org/jira/browse/HDFS-13230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16409082#comment-16409082 ] Íñigo Goiri commented on HDFS-13230: [~leftnoteasy], I mentioned the issue in this JIRA and tried to fix it. However, as mentioned in HDFS-13232, fixing it is kind of messy. If you know a safe way to fix it, let me know. > RBF: ConnectionManager's cleanup task will compare each pool's own active > conns with its total conns > > > Key: HDFS-13230 > URL: https://issues.apache.org/jira/browse/HDFS-13230 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Chao Sun >Priority: Minor > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2 > > Attachments: HDFS-13230.000.patch, HDFS-13230.001.patch > > > In the cleanup task: > {code:java} > long timeSinceLastActive = Time.now() - pool.getLastActiveTime(); > int total = pool.getNumConnections(); > int active = getNumActiveConnections(); > if (timeSinceLastActive > connectionCleanupPeriodMs || > {code} > the 3rd line should be pool.getNumActiveConnections() > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
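For readers following along, the one-line fix described in the issue can be sketched as follows. The ConnectionPool below is a simplified, hypothetical stand-in for the RBF class (only the fields needed to show the bug are modeled), and minActiveRatio mirrors the ratio-style threshold the real cleanup uses; it is not the actual Hadoop source.

```java
// Sketch of the ConnectionManager cleanup check described in HDFS-13230.
public class CleanupSketch {
    // Hypothetical, minimal stand-in for the RBF ConnectionPool.
    static class ConnectionPool {
        private final long lastActiveTime;
        private final int numConnections;
        private final int numActiveConnections;

        ConnectionPool(long lastActiveTime, int total, int active) {
            this.lastActiveTime = lastActiveTime;
            this.numConnections = total;
            this.numActiveConnections = active;
        }
        long getLastActiveTime() { return lastActiveTime; }
        int getNumConnections() { return numConnections; }
        int getNumActiveConnections() { return numActiveConnections; }
    }

    /** True when the pool is idle enough to shrink, per the fixed logic. */
    static boolean shouldCleanup(ConnectionPool pool, long nowMs,
                                 long cleanupPeriodMs, float minActiveRatio) {
        long timeSinceLastActive = nowMs - pool.getLastActiveTime();
        int total = pool.getNumConnections();
        // The bug: the original code called the manager-wide
        // getNumActiveConnections(), not the pool's own count.
        // The fix scopes the call to the pool:
        int active = pool.getNumActiveConnections();
        return timeSinceLastActive > cleanupPeriodMs
            || active < minActiveRatio * total;
    }

    public static void main(String[] args) {
        ConnectionPool idle = new ConnectionPool(0L, 10, 1);
        System.out.println(shouldCleanup(idle, 5_000L, 1_000L, 0.5f));
    }
}
```

With the manager-wide count, a busy router could keep every pool looking "active" and prevent idle pools from ever being cleaned; scoping the count to the pool restores the intended per-pool comparison.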
[jira] [Commented] (HDFS-13318) RBF: Fix FindBugs in hadoop-hdfs-rbf
[ https://issues.apache.org/jira/browse/HDFS-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16409077#comment-16409077 ] Íñigo Goiri commented on HDFS-13318: +1 > RBF: Fix FindBugs in hadoop-hdfs-rbf > > > Key: HDFS-13318 > URL: https://issues.apache.org/jira/browse/HDFS-13318 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Ekanth S >Priority: Minor > Attachments: HDFS-13318.001.patch > > > hadoop-hdfs-rbf has 3 FindBugs warnings: > * NamenodePriorityComparator should be serializable > * RemoteMethod.getTypes() may expose internal representation > * RemoteMethod may store mutable objects
[jira] [Updated] (HDFS-7877) [Umbrella] Support maintenance state for datanodes
[ https://issues.apache.org/jira/browse/HDFS-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated HDFS-7877: - Summary: [Umbrella] Support maintenance state for datanodes (was: Support maintenance state for datanodes) > [Umbrella] Support maintenance state for datanodes > -- > > Key: HDFS-7877 > URL: https://issues.apache.org/jira/browse/HDFS-7877 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, namenode >Reporter: Ming Ma >Assignee: Ming Ma >Priority: Major > Fix For: 2.9.0, 3.0.0-beta1, 3.1.0 > > Attachments: HDFS-7877-2.patch, HDFS-7877.patch, > Supportmaintenancestatefordatanodes-2.pdf, > Supportmaintenancestatefordatanodes.pdf > > > This requirement came up during the design for HDFS-7541. Given this feature > is mostly independent of upgrade domain feature, it is better to track it > under a separate jira. The design and draft patch will be available soon.
[jira] [Commented] (HDFS-12884) BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo
[ https://issues.apache.org/jira/browse/HDFS-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16409062#comment-16409062 ] Wangda Tan commented on HDFS-12884: --- [~shv], I just moved this to 3.1.1 since we're working on 3.1.0 on branch-3.1.0 which doesn't have this Jira. Please let me know your thoughts. > BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo > --- > > Key: HDFS-12884 > URL: https://issues.apache.org/jira/browse/HDFS-12884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.7.4 >Reporter: Konstantin Shvachko >Assignee: chencan >Priority: Major > Fix For: 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.2, 3.2.0, 3.1.1 > > Attachments: HDFS-12884.001.patch, HDFS-12884.002.patch, > HDFS-12884.003.patch > > > {{BlockUnderConstructionFeature.truncateBlock}} type should be changed to > {{BlockInfo}} from {{Block}}. {{truncateBlock}} is always assigned as > {{BlockInfo}}, so this will avoid unnecessary casts.
[jira] [Updated] (HDFS-12884) BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo
[ https://issues.apache.org/jira/browse/HDFS-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wangda Tan updated HDFS-12884: -- Fix Version/s: (was: 3.1.0) 3.1.1 > BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo > --- > > Key: HDFS-12884 > URL: https://issues.apache.org/jira/browse/HDFS-12884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.7.4 >Reporter: Konstantin Shvachko >Assignee: chencan >Priority: Major > Fix For: 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.2, 3.2.0, 3.1.1 > > Attachments: HDFS-12884.001.patch, HDFS-12884.002.patch, > HDFS-12884.003.patch > > > {{BlockUnderConstructionFeature.truncateBlock}} type should be changed to > {{BlockInfo}} from {{Block}}. {{truncateBlock}} is always assigned as > {{BlockInfo}}, so this will avoid unnecessary casts.
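To illustrate why the type change matters, here is a hedged sketch with hypothetical stand-ins for HDFS's Block/BlockInfo hierarchy (the real classes live in org.apache.hadoop.hdfs.server.blockmanagement): typing the field as the subclass it always holds removes the cast at every use site.

```java
// Illustration of HDFS-12884: narrow a field's declared type to the
// subclass it is always assigned, eliminating downcasts. Block and
// BlockInfo here are hypothetical minimal stand-ins.
public class TruncateBlockSketch {
    static class Block {
        final long id;
        Block(long id) { this.id = id; }
    }
    static class BlockInfo extends Block {
        BlockInfo(long id) { super(id); }
        boolean isComplete() { return true; } // subclass-only API
    }

    // Before: field declared as Block forces a cast at every use site.
    static class BeforeFeature {
        Block truncateBlock = new BlockInfo(1);
        boolean check() { return ((BlockInfo) truncateBlock).isComplete(); }
    }

    // After: field declared as BlockInfo, since it is always assigned one.
    static class AfterFeature {
        BlockInfo truncateBlock = new BlockInfo(1);
        boolean check() { return truncateBlock.isComplete(); }
    }

    public static void main(String[] args) {
        System.out.println(new BeforeFeature().check()
            && new AfterFeature().check());
    }
}
```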
[jira] [Commented] (HDFS-13230) RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns
[ https://issues.apache.org/jira/browse/HDFS-13230?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16409060#comment-16409060 ] Wangda Tan commented on HDFS-13230: --- [~elgoiri], while doing a Jira scan, I found the commit message doesn't match this Jira ID: {code:java} commit 0c2b969e0161a068bf9ae013c4b95508dfb90a8a Author: Inigo Goiri Date: Thu Mar 8 09:32:05 2018 -0800 HDFS-13232. RBF: ConnectionManager's cleanup task will compare each pool's own active conns with its total conns. Contributed by Chao Sun.{code} Posted it here so we can track this in the future. > RBF: ConnectionManager's cleanup task will compare each pool's own active > conns with its total conns > > > Key: HDFS-13230 > URL: https://issues.apache.org/jira/browse/HDFS-13230 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Chao Sun >Priority: Minor > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.2 > > Attachments: HDFS-13230.000.patch, HDFS-13230.001.patch > > > In the cleanup task: > {code:java} > long timeSinceLastActive = Time.now() - pool.getLastActiveTime(); > int total = pool.getNumConnections(); > int active = getNumActiveConnections(); > if (timeSinceLastActive > connectionCleanupPeriodMs || > {code} > the 3rd line should be pool.getNumActiveConnections() >
[jira] [Commented] (HDFS-13300) Ozone: Remove DatanodeID dependency from HDSL and Ozone
[ https://issues.apache.org/jira/browse/HDFS-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409055#comment-16409055 ] genericqa commented on HDFS-13300: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 25s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 44 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 25s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 55s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 23s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 19s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 26s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 27s{color} | {color:red} tools in HDFS-7240 failed. 
{color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 25s{color} | {color:red} common in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} integration-test in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 25s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 25s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 1s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 31s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 4s{color} | {color:red} hadoop-hdsl/common in HDFS-7240 has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s{color} | {color:red} common in HDFS-7240 failed. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 25s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 26s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} |
[jira] [Commented] (HDFS-13175) Add more information for checking argument in DiskBalancerVolume
[ https://issues.apache.org/jira/browse/HDFS-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16409053#comment-16409053 ] Anu Engineer commented on HDFS-13175: - [~yzhangal] Thanks for the comment and sharing the JIRA. That is precisely a situation where this assertion can trigger. However, I do agree that diskBalancer can be more graceful than failing via an assertion. If the capacity is less than the used space, the basic notions of space allocation are violated. So it is appropriate that diskBalancer fails. > Add more information for checking argument in DiskBalancerVolume > > > Key: HDFS-13175 > URL: https://issues.apache.org/jira/browse/HDFS-13175 > Project: Hadoop HDFS > Issue Type: Improvement > Components: diskbalancer >Affects Versions: 3.0.0 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > Fix For: 3.1.0, 3.0.2 > > Attachments: HDFS-13175.00.patch, HDFS-13175.01.patch > > > We have seen the following stack in production > {code} > Exception in thread "main" java.lang.IllegalArgumentException > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:72) > at > org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume.setUsed(DiskBalancerVolume.java:268) > at > org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getVolumeInfoFromStorageReports(DBNameNodeConnector.java:141) > at > org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getNodes(DBNameNodeConnector.java:90) > at > org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster.readClusterInfo(DiskBalancerCluster.java:132) > at > org.apache.hadoop.hdfs.server.diskbalancer.command.Command.readClusterInfo(Command.java:123) > at > org.apache.hadoop.hdfs.server.diskbalancer.command.PlanCommand.execute(PlanCommand.java:107) > {code} > raised from > {code} > public void setUsed(long dfsUsedSpace) { > Preconditions.checkArgument(dfsUsedSpace < this.getCapacity()); > this.used = dfsUsedSpace; > } > {code} > However, the datanode reports at the very moment were not captured. We should > add more information into the stack trace to better diagnose the issue.
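The "add more information" the issue asks for amounts to attaching the offending values to the exception so a production stack trace is diagnosable on its own. A minimal sketch of that idea follows; the field names, the volume-path parameter, and the use of plain IllegalArgumentException instead of Guava's Preconditions are assumptions for illustration, not the actual patch (which may also differ on whether equality is allowed).

```java
// Sketch of HDFS-13175's idea: fail with a message carrying the values
// that violated the invariant, instead of a bare IllegalArgumentException.
// Names (path, capacity, used) are modeled loosely on DiskBalancerVolume.
public class VolumeSketch {
    private final String path;
    private final long capacity;
    private long used;

    VolumeSketch(String path, long capacity) {
        this.path = path;
        this.capacity = capacity;
    }
    long getCapacity() { return capacity; }

    public void setUsed(long dfsUsedSpace) {
        // Before: Preconditions.checkArgument(dfsUsedSpace < getCapacity());
        // threw with no context. After: include volume path and both values.
        if (dfsUsedSpace > getCapacity()) {
            throw new IllegalArgumentException(String.format(
                "Volume %s has used space %d greater than capacity %d",
                path, dfsUsedSpace, getCapacity()));
        }
        this.used = dfsUsedSpace;
    }

    public static void main(String[] args) {
        VolumeSketch v = new VolumeSketch("/data/1", 100L);
        v.setUsed(50L); // within capacity: accepted
        try {
            v.setUsed(200L); // over capacity: rejected with context
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Guava's Preconditions.checkArgument also has overloads taking a message template and arguments, which would achieve the same effect with less code in the real class.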
[jira] [Comment Edited] (HDFS-13175) Add more information for checking argument in DiskBalancerVolume
[ https://issues.apache.org/jira/browse/HDFS-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16409044#comment-16409044 ] Yongjun Zhang edited comment on HDFS-13175 at 3/22/18 4:51 AM: --- Thanks for the work here [~eddyxu] and [~anu]. Saw https://issues.apache.org/jira/browse/HDFS-13034?focusedCommentId=16331614&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16331614 wonder if it's related. was (Author: yzhangal): Thanks for the work here [~eddyxu]. Saw https://issues.apache.org/jira/browse/HDFS-13034?focusedCommentId=16331614&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16331614 wonder if it's related. > Add more information for checking argument in DiskBalancerVolume > > > Key: HDFS-13175 > URL: https://issues.apache.org/jira/browse/HDFS-13175 > Project: Hadoop HDFS > Issue Type: Improvement > Components: diskbalancer >Affects Versions: 3.0.0 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > Fix For: 3.1.0, 3.0.2 > > Attachments: HDFS-13175.00.patch, HDFS-13175.01.patch > > > We have seen the following stack in production > {code} > Exception in thread "main" java.lang.IllegalArgumentException > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:72) > at > org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume.setUsed(DiskBalancerVolume.java:268) > at > org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getVolumeInfoFromStorageReports(DBNameNodeConnector.java:141) > at > org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getNodes(DBNameNodeConnector.java:90) > at > org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster.readClusterInfo(DiskBalancerCluster.java:132) > at > org.apache.hadoop.hdfs.server.diskbalancer.command.Command.readClusterInfo(Command.java:123) > at > org.apache.hadoop.hdfs.server.diskbalancer.command.PlanCommand.execute(PlanCommand.java:107)
> {code} > raised from > {code} > public void setUsed(long dfsUsedSpace) { > Preconditions.checkArgument(dfsUsedSpace < this.getCapacity()); > this.used = dfsUsedSpace; > } > {code} > However, the datanode reports at the very moment were not captured. We should > add more information into the stack trace to better diagnose the issue.
[jira] [Commented] (HDFS-13175) Add more information for checking argument in DiskBalancerVolume
[ https://issues.apache.org/jira/browse/HDFS-13175?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16409044#comment-16409044 ] Yongjun Zhang commented on HDFS-13175: -- Thanks for the work here [~eddyxu]. Saw https://issues.apache.org/jira/browse/HDFS-13034?focusedCommentId=16331614&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16331614 wonder if it's related. > Add more information for checking argument in DiskBalancerVolume > > > Key: HDFS-13175 > URL: https://issues.apache.org/jira/browse/HDFS-13175 > Project: Hadoop HDFS > Issue Type: Improvement > Components: diskbalancer >Affects Versions: 3.0.0 >Reporter: Lei (Eddy) Xu >Assignee: Lei (Eddy) Xu >Priority: Minor > Fix For: 3.1.0, 3.0.2 > > Attachments: HDFS-13175.00.patch, HDFS-13175.01.patch > > > We have seen the following stack in production > {code} > Exception in thread "main" java.lang.IllegalArgumentException > at > com.google.common.base.Preconditions.checkArgument(Preconditions.java:72) > at > org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerVolume.setUsed(DiskBalancerVolume.java:268) > at > org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getVolumeInfoFromStorageReports(DBNameNodeConnector.java:141) > at > org.apache.hadoop.hdfs.server.diskbalancer.connectors.DBNameNodeConnector.getNodes(DBNameNodeConnector.java:90) > at > org.apache.hadoop.hdfs.server.diskbalancer.datamodel.DiskBalancerCluster.readClusterInfo(DiskBalancerCluster.java:132) > at > org.apache.hadoop.hdfs.server.diskbalancer.command.Command.readClusterInfo(Command.java:123) > at > org.apache.hadoop.hdfs.server.diskbalancer.command.PlanCommand.execute(PlanCommand.java:107) > {code} > raised from > {code} > public void setUsed(long dfsUsedSpace) { > Preconditions.checkArgument(dfsUsedSpace < this.getCapacity()); > this.used = dfsUsedSpace; > } > {code} > However, the datanode reports at the very moment were not captured. 
We should > add more information into the stack trace to better diagnose the issue.
[jira] [Commented] (HDFS-13176) WebHdfs file path gets truncated when having semicolon (;) inside
[ https://issues.apache.org/jira/browse/HDFS-13176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16409035#comment-16409035 ] Yongjun Zhang commented on HDFS-13176: -- Hi [~zvenczel] and [~mackrorysd], Thanks for your work here. It seems reasonable to get the fix to branch-2 etc. as Sean indicated. Would you provide a patch, Zsolt? Thanks. > WebHdfs file path gets truncated when having semicolon (;) inside > - > > Key: HDFS-13176 > URL: https://issues.apache.org/jira/browse/HDFS-13176 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.0.0 >Reporter: Zsolt Venczel >Assignee: Zsolt Venczel >Priority: Major > Attachments: HDFS-13176-branch-2.01.patch, > HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, > HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.03.patch, > HDFS-13176-branch-2.03.patch, HDFS-13176-branch-2.04.patch, > HDFS-13176.01.patch, HDFS-13176.02.patch, > TestWebHdfsUrl.testWebHdfsSpecialCharacterFile.patch > > > Find attached a patch having a test case that tries to reproduce the problem.
[jira] [Commented] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16409009#comment-16409009 ] genericqa commented on HDFS-12792: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 35s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 27 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 13s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 50s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 40s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 3 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 20s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 57s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 49m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-12792 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12915558/HDFS-12792.010.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle xml | | uname | Linux 145c763332f1 4.4.0-64-generic #85-Ubuntu SMP Mon Feb 20 11:50:30 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8d898ab | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/23617/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/23617/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results |
[jira] [Commented] (HDFS-13318) RBF: Fix FindBugs in hadoop-hdfs-rbf
[ https://issues.apache.org/jira/browse/HDFS-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408999#comment-16408999 ] genericqa commented on HDFS-13318: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 44s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 3 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 21s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 48s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs-rbf generated 0 new + 0 unchanged - 3 fixed = 0 total (was 3) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 8s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 52m 43s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13318 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12915563/HDFS-13318.001.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 11e28f760339 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 14:43:09 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 8d898ab | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/23615/artifact/out/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-rbf-warnings.html | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23615/testReport/ | | Max. process+thread count | 938 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U:
[jira] [Updated] (HDFS-13218) Log audit event only used last EC policy name when add multiple policies from file
[ https://issues.apache.org/jira/browse/HDFS-13218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli updated HDFS-13218: --- Fix Version/s: (was: 3.0.1) (was: 3.1.0) Removing 3.1.0 fix-version from all JIRAs which are Invalid / Won't Fix / Duplicate / Cannot Reproduce. > Log audit event only used last EC policy name when add multiple policies from > file > --- > > Key: HDFS-13218 > URL: https://issues.apache.org/jira/browse/HDFS-13218 > Project: Hadoop HDFS > Issue Type: Improvement > Components: erasure-coding >Affects Versions: 3.1.0 >Reporter: liaoyuxiangqin >Priority: Major > > When I read the addErasureCodingPolicies() of the FSNamesystem class in the namenode, > I found the following code only used the last EC policy name for logAuditEvent; > I think this audit log can't track all the policies when multiple > erasure coding policies are added to the ErasureCodingPolicyManager. Thanks. > {code:java|title=FSNamesystem.java|borderStyle=solid} > try { > checkOperation(OperationCategory.WRITE); > checkNameNodeSafeMode("Cannot add erasure coding policy"); > for (ErasureCodingPolicy policy : policies) { > try { > ErasureCodingPolicy newPolicy = > FSDirErasureCodingOp.addErasureCodingPolicy(this, policy, > logRetryCache); > addECPolicyName = newPolicy.getName(); > responses.add(new AddErasureCodingPolicyResponse(newPolicy)); > } catch (HadoopIllegalArgumentException e) { > responses.add(new AddErasureCodingPolicyResponse(policy, e)); > } > } > success = true; > return responses.toArray(new AddErasureCodingPolicyResponse[0]); > } finally { > writeUnlock(operationName); > if (success) { > getEditLog().logSync(); > } > logAuditEvent(success, operationName, addECPolicyName, null, null); > } > {code}
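The issue is that addECPolicyName is a scalar that each loop iteration overwrites, so only the last successfully added policy name reaches the audit log. One natural fix, sketched below with hypothetical names (addECPolicyNames, joinPolicyNames — not taken from an actual patch), is to accumulate the names in a list and join them for the audit event:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the fix suggested by HDFS-13218: record every successfully
// added EC policy name instead of overwriting a single variable.
public class AuditSketch {
    // Hypothetical helper: render the accumulated names for logAuditEvent.
    static String joinPolicyNames(List<String> added) {
        return String.join(",", added);
    }

    public static void main(String[] args) {
        List<String> addECPolicyNames = new ArrayList<>();
        for (String policyName : new String[]{"RS-6-3-1024k", "XOR-2-1-1024k"}) {
            // In FSNamesystem this would be inside the try block, after
            // addErasureCodingPolicy succeeds:
            addECPolicyNames.add(policyName);
        }
        // In the finally block, the audit event would then carry all names:
        // logAuditEvent(success, operationName,
        //               joinPolicyNames(addECPolicyNames), null, null);
        System.out.println(joinPolicyNames(addECPolicyNames));
    }
}
```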
[jira] [Resolved] (HDFS-10675) [READ] Datanode support to read from external stores.
[ https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli resolved HDFS-10675. Resolution: Fixed > [READ] Datanode support to read from external stores. > - > > Key: HDFS-10675 > URL: https://issues.apache.org/jira/browse/HDFS-10675 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-10675-HDFS-9806.001.patch, > HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch, > HDFS-10675-HDFS-9806.004.patch, HDFS-10675-HDFS-9806.005.patch, > HDFS-10675-HDFS-9806.006.patch, HDFS-10675-HDFS-9806.007.patch, > HDFS-10675-HDFS-9806.008.patch, HDFS-10675-HDFS-9806.009.patch > > > This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external > stores, along with enabling the Datanode to read from such stores using a > {{ProvidedReplica}} and a {{ProvidedVolume}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13320) Ozone:Support for MicrobenchMarking Tool
[ https://issues.apache.org/jira/browse/HDFS-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408983#comment-16408983 ] Anu Engineer commented on HDFS-13320: - [~shashikant] Thank you for the patch. When I build and try to run this tool, I get the following error. Can you please investigate? {noformat} ./hadoop-3.2.0-SNAPSHOT/bin/oz genesis -h Exception in thread "main" java.lang.RuntimeException: ERROR: Unable to find the resource: /META-INF/BenchmarkList at org.openjdk.jmh.runner.AbstractResourceReader.getReaders(AbstractResourceReader.java:98) at org.openjdk.jmh.runner.BenchmarkList.find(BenchmarkList.java:122) at org.openjdk.jmh.runner.Runner.internalRun(Runner.java:256) at org.openjdk.jmh.runner.Runner.run(Runner.java:206) at org.apache.hadoop.ozone.genesis.Genesis.main(Genesis.java:50) {noformat} > Ozone:Support for MicrobenchMarking Tool > > > Key: HDFS-13320 > URL: https://issues.apache.org/jira/browse/HDFS-13320 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Shashikant Banerjee >Assignee: Shashikant Banerjee >Priority: Major > Attachments: HDFS-13320-HDFS-7240.001.patch > > > This Jira proposes to add a micro benchmarking tool called Genesis which > executes a set of HDSL/Ozone benchmarks. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
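For context on the stack trace above: JMH's Runner discovers benchmarks through a resource, /META-INF/BenchmarkList, which the JMH annotation processor generates at compile time; the Runner fails exactly as shown when the packaged jar lacks that resource (for example when annotation processing did not run, or a shaded/assembled jar dropped it). A small hypothetical diagnostic (not part of Genesis) can check whether the resource made it onto the classpath:

```java
// Hedged diagnostic sketch: report whether JMH's generated benchmark index
// is visible on the current classpath. Run standalone (as here, with no JMH
// benchmarks compiled in), it reports the resource as missing.
public class BenchmarkListCheck {
    public static void main(String[] args) {
        boolean present =
            BenchmarkListCheck.class.getResource("/META-INF/BenchmarkList") != null;
        System.out.println(present
            ? "BenchmarkList found on classpath"
            : "BenchmarkList missing from classpath");
    }
}
```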
[jira] [Resolved] (HDFS-11190) [READ] Namenode support for data stored in external stores.
[ https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli resolved HDFS-11190. Resolution: Fixed > [READ] Namenode support for data stored in external stores. > --- > > Key: HDFS-11190 > URL: https://issues.apache.org/jira/browse/HDFS-11190 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-11190-HDFS-9806.001.patch, > HDFS-11190-HDFS-9806.002.patch, HDFS-11190-HDFS-9806.003.patch, > HDFS-11190-HDFS-9806.004.patch > > > The goal of this JIRA is to enable the Namenode to know about blocks that are > in {{PROVIDED}} stores and are not necessarily stored on any Datanodes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Reopened] (HDFS-10675) [READ] Datanode support to read from external stores.
[ https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli reopened HDFS-10675: Resolving as Fixed instead of as Resolved per our conventions. > [READ] Datanode support to read from external stores. > - > > Key: HDFS-10675 > URL: https://issues.apache.org/jira/browse/HDFS-10675 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-10675-HDFS-9806.001.patch, > HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch, > HDFS-10675-HDFS-9806.004.patch, HDFS-10675-HDFS-9806.005.patch, > HDFS-10675-HDFS-9806.006.patch, HDFS-10675-HDFS-9806.007.patch, > HDFS-10675-HDFS-9806.008.patch, HDFS-10675-HDFS-9806.009.patch > > > This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external > stores, along with enabling the Datanode to read from such stores using a > {{ProvidedReplica}} and a {{ProvidedVolume}}. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Reopened] (HDFS-11190) [READ] Namenode support for data stored in external stores.
[ https://issues.apache.org/jira/browse/HDFS-11190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vinod Kumar Vavilapalli reopened HDFS-11190: Resolving as Fixed instead of as Resolved per our conventions. > [READ] Namenode support for data stored in external stores. > --- > > Key: HDFS-11190 > URL: https://issues.apache.org/jira/browse/HDFS-11190 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti >Assignee: Virajith Jalaparti >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-11190-HDFS-9806.001.patch, > HDFS-11190-HDFS-9806.002.patch, HDFS-11190-HDFS-9806.003.patch, > HDFS-11190-HDFS-9806.004.patch > > > The goal of this JIRA is to enable the Namenode to know about blocks that are > in {{PROVIDED}} stores and are not necessarily stored on any Datanodes. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12884) BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo
[ https://issues.apache.org/jira/browse/HDFS-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408958#comment-16408958 ] Hudson commented on HDFS-12884: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13865 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13865/]) HDFS-12884. BlockUnderConstructionFeature.truncateBlock should be of (shv: rev 8d898ab25f1c2032a07c9bbd96ba3d0c4eb5be87) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INode.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirTruncateOp.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockUnderConstructionFeature.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FileDiffList.java > BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo > --- > > Key: HDFS-12884 > URL: https://issues.apache.org/jira/browse/HDFS-12884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.7.4 >Reporter: Konstantin Shvachko >Assignee: chencan >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.2, 3.2.0 > > Attachments: HDFS-12884.001.patch, HDFS-12884.002.patch, > HDFS-12884.003.patch > > > {{BlockUnderConstructionFeature.truncateBlock}} type should be changed to > {{BlockInfo}} from {{Block}}. {{truncateBlock}} is always assigned as > {{BlockInfo}}, so this will avoid unnecessary casts. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
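The cast-avoidance rationale in the description above can be illustrated with a minimal, self-contained sketch (these stub Block/BlockInfo classes only mimic the real HDFS hierarchy, they are not the actual types): declaring the field as the subtype that is always assigned removes the cast at every use site.

```java
// Stub hierarchy mimicking the relationship: BlockInfo extends Block.
class Block {
    long blockId;
}

class BlockInfo extends Block {
    short getReplication() { return 3; }
}

public class TruncateBlockSketch {
    // Declared as the supertype: every BlockInfo-specific use needs a cast.
    static Block truncateBlockOld = new BlockInfo();
    // Declared as the type actually assigned: no casts at use sites.
    static BlockInfo truncateBlockNew = new BlockInfo();

    public static void main(String[] args) {
        short a = ((BlockInfo) truncateBlockOld).getReplication(); // cast required
        short b = truncateBlockNew.getReplication();               // no cast
        System.out.println(a == b); // prints "true"
    }
}
```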
[jira] [Updated] (HDFS-13319) Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path
[ https://issues.apache.org/jira/browse/HDFS-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Anu Engineer updated HDFS-13319: Resolution: Fixed Hadoop Flags: Reviewed Status: Resolved (was: Patch Available) [~msingh] Thanks for the contribution. I have committed this to the feature branch. > Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path > --- > > Key: HDFS-13319 > URL: https://issues.apache.org/jira/browse/HDFS-13319 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13319-HDFS-7240.001.patch > > > start-ozone.sh/stop-ozone.sh fails because the shell script fails to resolve > the full path. Interestingly, the start-dfs.sh script work as expected. > {code} > /opt/hadoop/hadoop-3.2.0-SNAPSHOT/sbin/start-ozone.sh > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13300) Ozone: Remove DatanodeID dependency from HDSL and Ozone
[ https://issues.apache.org/jira/browse/HDFS-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408936#comment-16408936 ] genericqa commented on HDFS-13300: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 44s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 44 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 2m 14s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 46s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 24s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 46s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 29s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 29s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 27s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 26s{color} | {color:red} tools in HDFS-7240 failed. 
{color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 27s{color} | {color:red} common in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} integration-test in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 27s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 29s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 28s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 9s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s{color} | {color:red} hadoop-hdsl/common in HDFS-7240 has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 25s{color} | {color:red} common in HDFS-7240 failed. 
{color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 28s{color} | {color:red} objectstore-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 28s{color} | {color:red} ozone-manager in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 24s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 30s{color} | {color:red} client in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s{color} | {color:red} container-service in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 30s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 29s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} |
[jira] [Commented] (HDFS-13319) Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path
[ https://issues.apache.org/jira/browse/HDFS-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408917#comment-16408917 ] Anu Engineer commented on HDFS-13319: - [~msingh] Thanks for the patch. [~elek] Thanks for the review. I will commit this now. > Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path > --- > > Key: HDFS-13319 > URL: https://issues.apache.org/jira/browse/HDFS-13319 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13319-HDFS-7240.001.patch > > > start-ozone.sh/stop-ozone.sh fails because the shell script fails to resolve > the full path. Interestingly, the start-dfs.sh script work as expected. > {code} > /opt/hadoop/hadoop-3.2.0-SNAPSHOT/sbin/start-ozone.sh > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12884) BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo
[ https://issues.apache.org/jira/browse/HDFS-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Konstantin Shvachko updated HDFS-12884: --- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 3.0.2 2.7.6 2.8.4 2.9.1 2.10.0 3.1.0 Status: Resolved (was: Patch Available) I just committed this to a bunch of branches. Thank you [~candychencan]. {code} 13f74a3..5275284 branch-2 -> branch-2 fec04c4..37403e1 branch-2.7 -> branch-2.7 d7c91f6..17735d8 branch-2.8 -> branch-2.8 b21b834..d1a89c4 branch-2.9 -> branch-2.9 8f60b50..987d90a branch-3.0 -> branch-3.0 5d4b2c3..21db4e9 branch-3.1 -> branch-3.1 5aa7052..8d898ab trunk -> trunk {code} > BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo > --- > > Key: HDFS-12884 > URL: https://issues.apache.org/jira/browse/HDFS-12884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.7.4 >Reporter: Konstantin Shvachko >Assignee: chencan >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 2.8.4, 2.7.6, 3.0.2, 3.2.0 > > Attachments: HDFS-12884.001.patch, HDFS-12884.002.patch, > HDFS-12884.003.patch > > > {{BlockUnderConstructionFeature.truncateBlock}} type should be changed to > {{BlockInfo}} from {{Block}}. {{truncateBlock}} is always assigned as > {{BlockInfo}}, so this will avoid unnecessary casts. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13326) RBF: router webUI's MountTable tab doesn't show "readonly" info
[ https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408875#comment-16408875 ] Íñigo Goiri commented on HDFS-13326: Internally, I usually remove the entries for such cases but yes it makes sense to have an explicit update option which allows this. The APIs already allow that so it should be a matter of tweaking the CLI. > RBF: router webUI's MountTable tab doesn't show "readonly" info > --- > > Key: HDFS-13326 > URL: https://issues.apache.org/jira/browse/HDFS-13326 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > > So currently in the MountTable tab, the "readonly" field always show empty, > no matter whether the mount entry is readonly or not. From the code > perspective, it tries to show: > {code:java} > {code} > The federationhealth.html will load hadoop.css, however the hadoop.css > doesn't have classes with a prefix "dfshealth-mount-read-only". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13326) RBF: router webUI's MountTable tab doesn't show "readonly" info
[ https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408857#comment-16408857 ] Wei Yan commented on HDFS-13326: {quote}-add option updates also. {quote} yes, I didn't scroll down to see the detailed implementation... But one interesting thing here: we cannot switch a mount entry from readonly to non-readonly, or remove a target path from a mount entry. For readonly, it currently only supports updating from false to true. {code:java} if (readonly) { existingEntry.setReadOnly(true); }{code} For the target path, it always tries to add new target locations. {code:java} for (String nsId : nss) { if (!existingEntry.addDestination(nsId, dest)) { System.err.println("Cannot add destination at " + nsId + " " + dest); } }{code} > RBF: router webUI's MountTable tab doesn't show "readonly" info > --- > > Key: HDFS-13326 > URL: https://issues.apache.org/jira/browse/HDFS-13326 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > > So currently in the MountTable tab, the "readonly" field always show empty, > no matter whether the mount entry is readonly or not. From the code > perspective, it tries to show: > {code:java} > {code} > The federationhealth.html will load hadoop.css, however the hadoop.css > doesn't have classes with a prefix "dfshealth-mount-read-only". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
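A hedged sketch of what a symmetric update could look like (MountEntrySketch is a made-up stand-in, not the real RBF mount table class): copy the requested flag unconditionally instead of only promoting false to true, so readonly can also be cleared.

```java
// Hypothetical stand-in for a mount table entry; not the real RBF class.
public class MountEntrySketch {
    private boolean readOnly;

    boolean isReadOnly() { return readOnly; }

    // One-way update, as in the snippet above: can set true, never clear it.
    void updateOneWay(boolean requested) {
        if (requested) {
            readOnly = true;
        }
    }

    // Symmetric update: the stored flag always follows the request.
    void updateSymmetric(boolean requested) {
        readOnly = requested;
    }

    public static void main(String[] args) {
        MountEntrySketch e = new MountEntrySketch();
        e.updateOneWay(true);
        e.updateOneWay(false);              // has no effect
        System.out.println(e.isReadOnly()); // prints "true"
        e.updateSymmetric(false);           // readonly -> non-readonly now works
        System.out.println(e.isReadOnly()); // prints "false"
    }
}
```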
[jira] [Commented] (HDFS-13098) RBF: Datanodes interacting with Routers
[ https://issues.apache.org/jira/browse/HDFS-13098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408823#comment-16408823 ] Íñigo Goiri commented on HDFS-13098: To do some of the proposed ideas, we could leverage some of the mechanisms that HDFS-13312 would require. > RBF: Datanodes interacting with Routers > --- > > Key: HDFS-13098 > URL: https://issues.apache.org/jira/browse/HDFS-13098 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Priority: Major > > Datanodes talk to particular Namenodes. We could use the Router > infrastructure for the Datanodes to register/heartbeating into them and the > Routers would forward this to particular Namenodes. This would make the > assignment of Datanodes to subclusters potentially more dynamic. > The implementation would potentially include: > * Router to implement part of DatanodeProtocol > * Forwarding DN messages into Routers > * Policies to assign datanodes to subclusters > * Datanodes to make blockpool configuration dynamic -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13312) NameNode High Availability ZooKeeper based discovery rather than explicit nn1,nn2 configs
[ https://issues.apache.org/jira/browse/HDFS-13312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408818#comment-16408818 ] Íñigo Goiri commented on HDFS-13312: In HDFS-13098, we would like to support the Datanodes contacting the Routers to discover the NNs instead of explicitly setting up the nameservice. I think that the discovery described in this JIRA may target the clients and not the workers (i.e., DNs). In any case, I think adding this support is valuable and HDFS-13098 could leverage it too (similar for YARN). Can you point to the solution to do RM HA discovery? > NameNode High Availability ZooKeeper based discovery rather than explicit > nn1,nn2 configs > - > > Key: HDFS-13312 > URL: https://issues.apache.org/jira/browse/HDFS-13312 > Project: Hadoop HDFS > Issue Type: Improvement > Components: ha, hdfs, namenode, nn >Affects Versions: 2.9.1 >Reporter: Hari Sekhon >Priority: Major > > Improvement Request for HDFS NameNode HA to use ZooKeeper based dynamic > discovery rather than explicitly setting the NameNode addresses via nn1,n2 in > the configs. > One proprietary Hadoop vendor already uses ZK for Resource Manager HA > discovery - it makes sense that the open source core should do this for both > Yarn and HDFS. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13204) RBF: Optimize name service safe mode icon
[ https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408809#comment-16408809 ] Íñigo Goiri commented on HDFS-13204: OK, let's create a rbf.css file with the new icons and reference this from federationhealth.html. I think we can create a static folder for that and reference it; I think mvn should be able to merge the static folders from hadoop-hdfs and hadoop-hdfs-rbf. > RBF: Optimize name service safe mode icon > - > > Key: HDFS-13204 > URL: https://issues.apache.org/jira/browse/HDFS-13204 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: liuhongtong >Assignee: liuhongtong >Priority: Minor > Attachments: HDFS-13204.001.patch, HDFS-13204.002.patch, > HDFS-13204.003.patch, HDFS-13204.004.patch, Routers.png, Subclusters.png, > image-2018-02-28-18-33-09-972.png, image-2018-02-28-18-33-47-661.png, > image-2018-02-28-18-35-35-708.png > > > In federation health webpage, the safe mode icons of Subclusters and Routers > are inconsistent. > The safe mode icon of Subclusters may induce users the name service is > maintaining. > !image-2018-02-28-18-33-09-972.png! > The safe mode icon of Routers: > !image-2018-02-28-18-33-47-661.png! > In fact, if the name service is in safe mode, users can't do writing related > operations. So I think the safe mode icon in Subclusters should be modified, > which may be more reasonable. > !image-2018-02-28-18-35-35-708.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13326) RBF: router webUI's MountTable tab doesn't show "readonly" info
[ https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408788#comment-16408788 ] Íñigo Goiri commented on HDFS-13326: Oh sorry, I misunderstood. -add option updates also. We could make it explicit in the doc. > RBF: router webUI's MountTable tab doesn't show "readonly" info > --- > > Key: HDFS-13326 > URL: https://issues.apache.org/jira/browse/HDFS-13326 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > > So currently in the MountTable tab, the "readonly" field always show empty, > no matter whether the mount entry is readonly or not. From the code > perspective, it tries to show: > {code:java} > {code} > The federationhealth.html will load hadoop.css, however the hadoop.css > doesn't have classes with a prefix "dfshealth-mount-read-only". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13326) RBF: router webUI's MountTable tab doesn't show "readonly" info
[ https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408781#comment-16408781 ] Wei Yan commented on HDFS-13326: Sorry, I didn't type it clearly. I mean the "update" command in dfsrouteradmin. {code:java} Federation Admin Tools: [-add[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL] -owner -group -mode ] [-rm ] [-ls ]{code} Currently it only supports -add and -rm, and no direct way to "update"... > RBF: router webUI's MountTable tab doesn't show "readonly" info > --- > > Key: HDFS-13326 > URL: https://issues.apache.org/jira/browse/HDFS-13326 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > > So currently in the MountTable tab, the "readonly" field always show empty, > no matter whether the mount entry is readonly or not. From the code > perspective, it tries to show: > {code:java} > {code} > The federationhealth.html will load hadoop.css, however the hadoop.css > doesn't have classes with a prefix "dfshealth-mount-read-only". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13204) RBF: Optimize name service safe mode icon
[ https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408779#comment-16408779 ] Wei Yan commented on HDFS-13204: [~liuhongtong], thanks for the patch. [^HDFS-13204.004.patch] LGTM, and I tried in my local env. One more item, we need to add two more css classes as mentioned in HDFS-13326. > RBF: Optimize name service safe mode icon > - > > Key: HDFS-13204 > URL: https://issues.apache.org/jira/browse/HDFS-13204 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: liuhongtong >Assignee: liuhongtong >Priority: Minor > Attachments: HDFS-13204.001.patch, HDFS-13204.002.patch, > HDFS-13204.003.patch, HDFS-13204.004.patch, Routers.png, Subclusters.png, > image-2018-02-28-18-33-09-972.png, image-2018-02-28-18-33-47-661.png, > image-2018-02-28-18-35-35-708.png > > > In federation health webpage, the safe mode icons of Subclusters and Routers > are inconsistent. > The safe mode icon of Subclusters may induce users the name service is > maintaining. > !image-2018-02-28-18-33-09-972.png! > The safe mode icon of Routers: > !image-2018-02-28-18-33-47-661.png! > In fact, if the name service is in safe mode, users can't do writing related > operations. So I think the safe mode icon in Subclusters should be modified, > which may be more reasonable. > !image-2018-02-28-18-35-35-708.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12884) BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo
[ https://issues.apache.org/jira/browse/HDFS-12884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408778#comment-16408778 ] Konstantin Shvachko commented on HDFS-12884: +1 looks good. Committing in a bit. > BlockUnderConstructionFeature.truncateBlock should be of type BlockInfo > --- > > Key: HDFS-12884 > URL: https://issues.apache.org/jira/browse/HDFS-12884 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Affects Versions: 2.7.4 >Reporter: Konstantin Shvachko >Assignee: chencan >Priority: Major > Attachments: HDFS-12884.001.patch, HDFS-12884.002.patch, > HDFS-12884.003.patch > > > {{BlockUnderConstructionFeature.truncateBlock}} type should be changed to > {{BlockInfo}} from {{Block}}. {{truncateBlock}} is always assigned as > {{BlockInfo}}, so this will avoid unnecessary casts. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13326) RBF: router webUI's MountTable tab doesn't show "readonly" info
[ https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408764#comment-16408764 ] Íñigo Goiri commented on HDFS-13326: {quote} One more question, do you remember any story about an "update mount entry" option in the RouterAdmin cmd? I didn't find it now; did we forget to add it? {quote} There were a couple JIRAs related to that: * HDFS-12988 * HDFS-13212 > RBF: router webUI's MountTable tab doesn't show "readonly" info > --- > > Key: HDFS-13326 > URL: https://issues.apache.org/jira/browse/HDFS-13326 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > > So currently in the MountTable tab, the "readonly" field always show empty, > no matter whether the mount entry is readonly or not. From the code > perspective, it tries to show: > {code:java} > {code} > The federationhealth.html will load hadoop.css, however the hadoop.css > doesn't have classes with a prefix "dfshealth-mount-read-only". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13326) RBF: router webUI's MountTable tab doesn't show "readonly" info
[ https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408754#comment-16408754 ] Wei Yan commented on HDFS-13326: [~elgoiri], sure, let me take a pass at HDFS-13204. {quote}Regarding the information, in the WebUI I think is easy to see these things but the cmd might be a little overwhelming. {quote} Agree, let's leave the cmd as it is now. One more question, do you remember any story about an "update mount entry" option in the RouterAdmin cmd? I didn't find it now; did we forget to add it? > RBF: router webUI's MountTable tab doesn't show "readonly" info > --- > > Key: HDFS-13326 > URL: https://issues.apache.org/jira/browse/HDFS-13326 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > > So currently in the MountTable tab, the "readonly" field always show empty, > no matter whether the mount entry is readonly or not. From the code > perspective, it tries to show: > {code:java} > {code} > The federationhealth.html will load hadoop.css, however the hadoop.css > doesn't have classes with a prefix "dfshealth-mount-read-only". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11043) TestWebHdfsTimeouts fails
[ https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408752#comment-16408752 ] Hudson commented on HDFS-11043: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13864 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/13864/]) HDFS-11043. TestWebHdfsTimeouts fails. Contributed by Xiaoyu Yao and (xyao: rev 389bc6d3da51f4ead4b84f1675e9631dc18f1110) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHdfsTimeouts.java > TestWebHdfsTimeouts fails > - > > Key: HDFS-11043 > URL: https://issues.apache.org/jira/browse/HDFS-11043 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Chao Sun >Priority: Major > Fix For: 3.2.0 > > Attachments: HDFS-11043.000.patch, HDFS-11043.001.patch, > HDFS-11043.002.patch, org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt > > > I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at > least on trunk. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13326) RBF: router webUI's MountTable tab doesn't show "readonly" info
[ https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408749#comment-16408749 ] Íñigo Goiri commented on HDFS-13326: [~ywskycn], in HDFS-13204, we are switching the icons a little. Do you mind reviewing that one to see if we should move the icons to a hadoop-hdfs-rbf css? We can then add this icon in this JIRA. Regarding the information, in the WebUI I think it is easy to see these things, but the cmd might be a little overwhelming. We may want to add a -d option to show the details. > RBF: router webUI's MountTable tab doesn't show "readonly" info > --- > > Key: HDFS-13326 > URL: https://issues.apache.org/jira/browse/HDFS-13326 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > > So currently in the MountTable tab, the "readonly" field always show empty, > no matter whether the mount entry is readonly or not. From the code > perspective, it tries to show: > {code:java} > {code} > The federationhealth.html will load hadoop.css, however the hadoop.css > doesn't have classes with a prefix "dfshealth-mount-read-only". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13326) RBF: router webUI's MountTable tab doesn't show "readonly" info
[ https://issues.apache.org/jira/browse/HDFS-13326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408743#comment-16408743 ] Wei Yan commented on HDFS-13326: Also, the cmd "bin/hdfs dfsrouteradmin -ls" currently shows less information than the WebUI, e.g. "readonly", "order", "date created"... Not sure whether we should add more such information for better debuggability. On the other hand, we don't want to overwhelm the cmd output, for better readability. Opinions, [~elgoiri] [~linyiqun]? > RBF: router webUI's MountTable tab doesn't show "readonly" info > --- > > Key: HDFS-13326 > URL: https://issues.apache.org/jira/browse/HDFS-13326 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Wei Yan >Assignee: Wei Yan >Priority: Minor > > So currently in the MountTable tab, the "readonly" field always show empty, > no matter whether the mount entry is readonly or not. From the code > perspective, it tries to show: > {code:java} > {code} > The federationhealth.html will load hadoop.css, however the hadoop.css > doesn't have classes with a prefix "dfshealth-mount-read-only". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13326) RBF: router webUI's MountTable tab doesn't show "readonly" info
Wei Yan created HDFS-13326: -- Summary: RBF: router webUI's MountTable tab doesn't show "readonly" info Key: HDFS-13326 URL: https://issues.apache.org/jira/browse/HDFS-13326 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Wei Yan Assignee: Wei Yan Currently, in the MountTable tab the "readonly" field always shows empty, no matter whether the mount entry is read-only or not. From the code perspective, it tries to show: {code:java} {code} federationhealth.html loads hadoop.css; however, hadoop.css doesn't have classes with the prefix "dfshealth-mount-read-only". -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408652#comment-16408652 ] Ted Yu commented on HDFS-12574: --- See if you have a recommendation on how the following code can be formulated using public APIs: https://github.com/apache/hbase/blob/master/hbase-server/src/main/java/org/apache/hadoop/hbase/io/asyncfs/FanOutOneBlockAsyncDFSOutputSaslHelper.java#L229 > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13318) RBF: Fix FindBugs in hadoop-hdfs-rbf
[ https://issues.apache.org/jira/browse/HDFS-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408635#comment-16408635 ] Íñigo Goiri commented on HDFS-13318: [^HDFS-13318.001.patch] looks good. Let's see what Yetus says. > RBF: Fix FindBugs in hadoop-hdfs-rbf > > > Key: HDFS-13318 > URL: https://issues.apache.org/jira/browse/HDFS-13318 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Ekanth S >Priority: Minor > Attachments: HDFS-13318.001.patch > > > hadoop-hdfs-rbf has 3 FindBug warnings: > * NamenodePriorityComparator should be serializable > * RemoteMethod.getTypes() may expose internal representation > * RemoteMethod may store mutable objects -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13318) RBF: Fix FindBugs in hadoop-hdfs-rbf
[ https://issues.apache.org/jira/browse/HDFS-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13318: --- Status: Patch Available (was: Open) > RBF: Fix FindBugs in hadoop-hdfs-rbf > > > Key: HDFS-13318 > URL: https://issues.apache.org/jira/browse/HDFS-13318 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Ekanth S >Priority: Minor > Attachments: HDFS-13318.001.patch > > > hadoop-hdfs-rbf has 3 FindBug warnings: > * NamenodePriorityComparator should be serializable > * RemoteMethod.getTypes() may expose internal representation > * RemoteMethod may store mutable objects -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13318) RBF: Fix FindBugs in hadoop-hdfs-rbf
[ https://issues.apache.org/jira/browse/HDFS-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408634#comment-16408634 ] Ekanth S commented on HDFS-13318: - [~goiri], [~ywskycn], added a patch that resolves the FindBugs warnings. Please review and let me know. > RBF: Fix FindBugs in hadoop-hdfs-rbf > > > Key: HDFS-13318 > URL: https://issues.apache.org/jira/browse/HDFS-13318 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Ekanth S >Priority: Minor > Attachments: HDFS-13318.001.patch > > > hadoop-hdfs-rbf has 3 FindBug warnings: > * NamenodePriorityComparator should be serializable > * RemoteMethod.getTypes() may expose internal representation > * RemoteMethod may store mutable objects -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13318) RBF: Fix FindBugs in hadoop-hdfs-rbf
[ https://issues.apache.org/jira/browse/HDFS-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekanth S updated HDFS-13318: Attachment: HDFS-13318.001.patch > RBF: Fix FindBugs in hadoop-hdfs-rbf > > > Key: HDFS-13318 > URL: https://issues.apache.org/jira/browse/HDFS-13318 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Ekanth S >Priority: Minor > Attachments: HDFS-13318.001.patch > > > hadoop-hdfs-rbf has 3 FindBug warnings: > * NamenodePriorityComparator should be serializable > * RemoteMethod.getTypes() may expose internal representation > * RemoteMethod may store mutable objects -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408631#comment-16408631 ] Rushabh S Shah commented on HDFS-12574: --- {quote}Is it possible to expose decryptEncryptedDataEncryptionKey in a @InterfaceAudience.Public class so that downstream project(s) can use it ? {quote} It's an implementation detail that HBase should have nothing to do with. If you want to read an encrypted file, just use DistributedFileSystem#open(); the returned InputStream will take care of decryption internally. > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
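The transparent-decryption pattern described above (open a stream, read plaintext, never touch key material mid-read) is the same decorator idiom as the JDK's javax.crypto.CipherInputStream. A minimal self-contained sketch of that idiom — plain JDK code, not HDFS/KMS code, with a hard-coded demo key standing in for a key fetched from the KMS:

```java
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class TransparentDecryptDemo {
  public static void main(String[] args) throws Exception {
    byte[] key = new byte[16];  // demo key; a real system resolves it via the KMS
    byte[] iv = new byte[16];
    byte[] plain = "hello encrypted world".getBytes(StandardCharsets.UTF_8);

    // Writer side: the stored bytes are ciphertext.
    Cipher enc = Cipher.getInstance("AES/CTR/NoPadding");
    enc.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
    byte[] stored = enc.doFinal(plain);

    // Reader side: the returned stream decrypts internally;
    // the caller just reads plaintext bytes, as with DistributedFileSystem#open().
    Cipher dec = Cipher.getInstance("AES/CTR/NoPadding");
    dec.init(Cipher.DECRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
    InputStream in = new CipherInputStream(new ByteArrayInputStream(stored), dec);

    byte[] out = in.readAllBytes();
    System.out.println(new String(out, StandardCharsets.UTF_8));
  }
}
```

The design point is that decryption stays an implementation detail of the stream wrapper, which is why downstream callers should not need the private HdfsKMSUtil helpers at all.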
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408628#comment-16408628 ] Ted Yu commented on HDFS-12574: --- Is it possible to expose decryptEncryptedDataEncryptionKey in a @InterfaceAudience.Public class so that downstream project(s) can use it ? thanks > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408596#comment-16408596 ] Rushabh S Shah commented on HDFS-12574: --- {noformat} @InterfaceAudience.Private @InterfaceStability.Unstable public final class HdfsKMSUtil { {noformat} As the class annotations say, it is {{private and unstable}}, and most likely it will change in HDFS-12597. *Please don't* reach into hadoop classes marked as Private and Unstable through reflection. We already had one case (HDFS-11689) where Hive was reaching into hadoop private methods. > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-12574) Add CryptoInputStream to WebHdfsFileSystem read call.
[ https://issues.apache.org/jira/browse/HDFS-12574?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408590#comment-16408590 ] Ted Yu commented on HDFS-12574: --- hbase 2.0 needs to call HdfsKMSUtil#decryptEncryptedDataEncryptionKey (through reflection). Is it possible to add annotation / comment to the method so that the method is stable for future hadoop releases ? Thanks > Add CryptoInputStream to WebHdfsFileSystem read call. > - > > Key: HDFS-12574 > URL: https://issues.apache.org/jira/browse/HDFS-12574 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: encryption, kms, webhdfs >Reporter: Rushabh S Shah >Assignee: Rushabh S Shah >Priority: Major > Fix For: 3.1.0, 2.10.0, 2.9.1, 3.0.1, 2.8.4 > > Attachments: HDFS-12574.001.patch, HDFS-12574.002.patch, > HDFS-12574.003.patch, HDFS-12574.004.patch, HDFS-12574.005.patch, > HDFS-12574.006.patch, HDFS-12574.007.patch, HDFS-12574.008.patch, > HDFS-12574.009.patch, HDFS-12574.010.branch-2.8.patch, > HDFS-12574.010.branch-2.patch, HDFS-12574.010.patch, > HDFS-12574.011.branch-2.8.patch, HDFS-12574.011.branch-2.patch, > HDFS-12574.012.branch-2.8.patch, HDFS-12574.012.branch-2.patch, > HDFS-12574.012.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12792: --- Attachment: HDFS-12792.010.patch > RBF: Test Router-based federation using HDFSContract > > > Key: HDFS-12792 > URL: https://issues.apache.org/jira/browse/HDFS-12792 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Labels: RBF > Attachments: HDFS-12615.000.patch, HDFS-12792.001.patch, > HDFS-12792.002.patch, HDFS-12792.003.patch, HDFS-12792.004.patch, > HDFS-12792.005.patch, HDFS-12792.006.patch, HDFS-12792.007.patch, > HDFS-12792.008.patch, HDFS-12792.009.patch, HDFS-12792.010.patch > > > Router-based federation should support HDFSContract. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11043) TestWebHdfsTimeouts fails
[ https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-11043: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.2.0 Status: Resolved (was: Patch Available) Thanks all for the review and discussion. I've committed the patch to trunk. > TestWebHdfsTimeouts fails > - > > Key: HDFS-11043 > URL: https://issues.apache.org/jira/browse/HDFS-11043 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Chao Sun >Priority: Major > Fix For: 3.2.0 > > Attachments: HDFS-11043.000.patch, HDFS-11043.001.patch, > HDFS-11043.002.patch, org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt > > > I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at > least on trunk. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13055) Aggregate usage statistics from datanodes
[ https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408572#comment-16408572 ] genericqa commented on HDFS-13055: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 1m 3s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 26s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 2m 18s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 12m 54s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 32s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 34s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 25s{color} | {color:green} hadoop-hdfs-client in the patch passed. 
{color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 98m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 23s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}166m 42s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailureWithRandomECPolicy | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13055 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12915525/HDFS-13055.010.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 698f3d4d2706 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6c63cc7 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_151 | | findbugs | v3.1.0-RC1 | | unit |
[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12792: --- Attachment: HDFS-12792.009.patch > RBF: Test Router-based federation using HDFSContract > > > Key: HDFS-12792 > URL: https://issues.apache.org/jira/browse/HDFS-12792 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Labels: RBF > Attachments: HDFS-12615.000.patch, HDFS-12792.001.patch, > HDFS-12792.002.patch, HDFS-12792.003.patch, HDFS-12792.004.patch, > HDFS-12792.005.patch, HDFS-12792.006.patch, HDFS-12792.007.patch, > HDFS-12792.008.patch, HDFS-12792.009.patch > > > Router-based federation should support HDFSContract. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13300) Ozone: Remove DatanodeID dependency from HDSL and Ozone
[ https://issues.apache.org/jira/browse/HDFS-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408553#comment-16408553 ] Xiaoyu Yao commented on HDFS-13300: --- Thanks [~nandakumar131] for the patch. It looks good to me overall. Here are a few comments: Hdsl.proto Line 34-36: should we make ipAddress/hostName/infoPort required? StorageContainerDatanodeProtocol.proto Line 35-40: unused imports can be removed from the proto file (IntelliJ won't report unused protobuf imports). (Let's comment out line 149, the only reference to hdfs.StorageType, as it is not being used in the code.) Then we can completely remove the hdfs-related proto dependencies. {code} // optional hadoop.hdfs.StorageTypeProto storageType = 5 [default = DISK]; {code} CommandQueue.java Line 43: I like the idea of minimizing the map key size to reduce Ozone memory usage. We should check other places for unnecessary usage of DatanodeDetails as a key. Line 109: We prefer a shorter name, as you mentioned here: DatanodeDetails.Uuid is better than DatanodeDetails.datanodeUuid. Can you change that in Hdsl.proto DatanodeDetails and DatanodeDetails.java? This can be done in a separate JIRA considering the size of the current patch. ContainerManagerImpl.java Line 124: NIT: remove the "ID" DatanodeDeletedBlockTransactions.java Line 44: this map can be keyed by UUID DatanodeDetails.java Can we directly use the protobuf-generated class to avoid conversions? This can be done in a separate JIRA. HdfsUtils.java The unrelated change in hdfs can be avoided. HdslDatanodeService.java Line 80: can we define a separate SCM exception for this? DataNodeServicePlugin.java Agree with @Marton that we should use the ServicePlugin directly. But this can be fixed in a separate JIRA. InitDatanodeState.java Line 109: agree with @Marton that we might not need to persist everything, as the port may change or be unavailable across restarts. This can be fixed later. 
ObjectStoreRestPlugin.java Line 78-86: can be simplified with the super class's implementation of the same method. SCMNodeManager.java Line 126: can we change the nodeStats map to be keyed by UUID as well? > Ozone: Remove DatanodeID dependency from HDSL and Ozone > > > Key: HDFS-13300 > URL: https://issues.apache.org/jira/browse/HDFS-13300 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Attachments: HDFS-13300-HDFS-7240.000.patch, > HDFS-13300-HDFS-7240.001.patch, HDFS-13300-HDFS-7240.002.patch > > > DatanodeID has been modified to add HDSL/Ozone related information > previously. This jira is to remove DatanodeID dependency from HDSL/Ozone to > make it truly pluggable without having the need to modify DatanodeID. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
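The review above repeatedly suggests keying maps by the node's UUID rather than by the full DatanodeDetails object. A plain-Java sketch of why that helps (NodeDetails here is a hypothetical stand-in for DatanodeDetails, not the real class): the key becomes one small, immutable 128-bit value, and later mutation of the details object cannot corrupt lookups.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class UuidKeyDemo {
  // Hypothetical stand-in for DatanodeDetails: a heavier object with mutable fields.
  static class NodeDetails {
    final UUID uuid;
    String ipAddress;  // may change across restarts
    NodeDetails(UUID uuid, String ip) { this.uuid = uuid; this.ipAddress = ip; }
  }

  public static void main(String[] args) {
    NodeDetails dn = new NodeDetails(UUID.randomUUID(), "10.0.0.1");

    // Key by the stable UUID, not by the whole details object.
    Map<UUID, Integer> activeContainers = new HashMap<>();
    activeContainers.put(dn.uuid, 42);

    dn.ipAddress = "10.0.0.99";  // mutation does not disturb the map key
    System.out.println(activeContainers.get(dn.uuid));
  }
}
```

Keying by the details object itself would either require a careful hashCode/equals over only immutable fields or risk lost entries when a field changes; the UUID key sidesteps both issues and uses less memory per entry.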
[jira] [Commented] (HDFS-12452) TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs
[ https://issues.apache.org/jira/browse/HDFS-12452?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408529#comment-16408529 ] Ajay Kumar commented on HDFS-12452: --- [~xyao], thanks for taking this up. Patch 2 looks good to me. There are Jenkins failures for the same test at a few other points. For example, it fails in the pre-build for HDFS-7527 with the stack trace below. {code}[ERROR] testSuccessiveVolumeFailures(org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting) Time elapsed: 99.528 s <<< ERROR! java.util.concurrent.TimeoutException: Timed out waiting for DN to die at org.apache.hadoop.hdfs.DFSTestUtil.waitForDatanodeDeath(DFSTestUtil.java:740) at org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:225) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47) at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12) at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44) at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17) at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26) at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27) at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74) {code} Although the test succeeds every time on my local machine, I think forcing the DN to check for disk failure and then triggering a heartbeat just after inducing the volume error at L214 may help. Shall we include it in the current patch or track it in a separate JIRA? 
{code}dns.get(2).checkDiskError(); cluster.triggerHeartbeats(); {code} > TestDataNodeVolumeFailureReporting fails in trunk Jenkins runs > -- > > Key: HDFS-12452 > URL: https://issues.apache.org/jira/browse/HDFS-12452 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Arpit Agarwal >Assignee: Xiaoyu Yao >Priority: Critical > Labels: flaky-test > Attachments: HDFS-12452.001.patch, HDFS-12452.002.patch > > > TestDataNodeVolumeFailureReporting#testSuccessiveVolumeFailures fails > frequently in Jenkins runs but it passes locally on my dev machine. > e.g. > https://builds.apache.org/job/PreCommit-HDFS-Build/21134/testReport/org.apache.hadoop.hdfs.server.datanode/TestDataNodeVolumeFailureReporting/testSuccessiveVolumeFailures/ > {code} > Error Message > test timed out after 12 milliseconds > Stacktrace > java.lang.Exception: test timed out after 12 milliseconds > at java.lang.Thread.sleep(Native Method) > at > org.apache.hadoop.hdfs.DFSTestUtil.waitReplication(DFSTestUtil.java:761) > at > org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting.testSuccessiveVolumeFailures(TestDataNodeVolumeFailureReporting.java:189) > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-12792) RBF: Test Router-based federation using HDFSContract
[ https://issues.apache.org/jira/browse/HDFS-12792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-12792: --- Attachment: HDFS-12792.008.patch > RBF: Test Router-based federation using HDFSContract > > > Key: HDFS-12792 > URL: https://issues.apache.org/jira/browse/HDFS-12792 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Labels: RBF > Attachments: HDFS-12615.000.patch, HDFS-12792.001.patch, > HDFS-12792.002.patch, HDFS-12792.003.patch, HDFS-12792.004.patch, > HDFS-12792.005.patch, HDFS-12792.006.patch, HDFS-12792.007.patch, > HDFS-12792.008.patch > > > Router-based federation should support HDFSContract. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls
[ https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408482#comment-16408482 ] Bharat Viswanadham edited comment on HDFS-13222 at 3/21/18 7:48 PM: Hi [~shv] Yes, we don't need to restart the NameNode whenever we change minBlockSize, as minBlockSize is now passed as a parameter in the RPC call from the Balancer. This lets the Balancer run with different minBlockSize values in different runs. was (Author: bharatviswa): Hi [~shv] Yes, we don't need to restart the NameNode when we change minBlockSize, as minBlockSize is now passed as a parameter in the RPC call from the Balancer. This lets the Balancer run with different minBlockSize values in different runs. > Update getBlocks method to take minBlockSize in RPC calls > - > > Key: HDFS-13222 > URL: https://issues.apache.org/jira/browse/HDFS-13222 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer mover >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch, > HDFS-13222.02.patch > > > > Using the balancer parameter in getBlocks was done in HDFS-9412. > > Pass the Balancer conf value from the Balancer to the NN via getBlocks in each RPC, > as [~szetszwo] suggested. >
[jira] [Commented] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls
[ https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408482#comment-16408482 ] Bharat Viswanadham commented on HDFS-13222: --- Hi [~shv] Yes, we don't need to restart the NameNode when we change minBlockSize, as minBlockSize is now passed as a parameter in the RPC call from the Balancer. This lets the Balancer run with different minBlockSize values in different runs. > Update getBlocks method to take minBlockSize in RPC calls > - > > Key: HDFS-13222 > URL: https://issues.apache.org/jira/browse/HDFS-13222 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer mover >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch, > HDFS-13222.02.patch > > > > Using the balancer parameter in getBlocks was done in HDFS-9412. > > Pass the Balancer conf value from the Balancer to the NN via getBlocks in each RPC, > as [~szetszwo] suggested. >
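The point being confirmed — the threshold travels with each RPC instead of living in server-side configuration — can be sketched with a simplified stand-in for {{NamenodeProtocol#getBlocks}} (the class and method shapes below are illustrative, not the real API):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

/** Sketch: the caller supplies minBlockSize on every call, so the server
 *  never needs a restart when the Balancer changes its threshold. */
class BlockService {
    // Stand-in for the block sizes the NameNode tracks for one datanode.
    private final List<Long> blockSizes;

    BlockService(List<Long> blockSizes) {
        this.blockSizes = blockSizes;
    }

    /** Analogous to getBlocks(datanode, size, minBlockSize): filter with
     *  the per-call threshold instead of a server-side config key. */
    List<Long> getBlocks(long minBlockSize) {
        return blockSizes.stream()
                .filter(s -> s >= minBlockSize)
                .collect(Collectors.toList());
    }
}
```

Two successive calls with different thresholds return different block lists with no server-side reconfiguration, which is exactly what lets consecutive Balancer runs use different minBlockSize values.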
[jira] [Commented] (HDFS-12512) RBF: Add WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408468#comment-16408468 ] Wei Yan commented on HDFS-12512: Rebased a new patch: [^HDFS-12512.009.patch]. > RBF: Add WebHDFS > > > Key: HDFS-12512 > URL: https://issues.apache.org/jira/browse/HDFS-12512 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Wei Yan >Priority: Major > Labels: RBF > Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, > HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, > HDFS-12512.005.patch, HDFS-12512.006.patch, HDFS-12512.007.patch, > HDFS-12512.008.patch, HDFS-12512.009.patch > > > The Router currently does not support WebHDFS. It needs to implement > something similar to {{NamenodeWebHdfsMethods}}.
[jira] [Updated] (HDFS-12512) RBF: Add WebHDFS
[ https://issues.apache.org/jira/browse/HDFS-12512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wei Yan updated HDFS-12512: --- Attachment: HDFS-12512.009.patch > RBF: Add WebHDFS > > > Key: HDFS-12512 > URL: https://issues.apache.org/jira/browse/HDFS-12512 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: fs >Reporter: Íñigo Goiri >Assignee: Wei Yan >Priority: Major > Labels: RBF > Attachments: HDFS-12512.000.patch, HDFS-12512.001.patch, > HDFS-12512.002.patch, HDFS-12512.003.patch, HDFS-12512.004.patch, > HDFS-12512.005.patch, HDFS-12512.006.patch, HDFS-12512.007.patch, > HDFS-12512.008.patch, HDFS-12512.009.patch > > > The Router currently does not support WebHDFS. It needs to implement > something similar to {{NamenodeWebHdfsMethods}}.
[jira] [Commented] (HDFS-11043) TestWebHdfsTimeouts fails
[ https://issues.apache.org/jira/browse/HDFS-11043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408451#comment-16408451 ] Íñigo Goiri commented on HDFS-11043: +1 on [^HDFS-11043.002.patch]. > TestWebHdfsTimeouts fails > - > > Key: HDFS-11043 > URL: https://issues.apache.org/jira/browse/HDFS-11043 > Project: Hadoop HDFS > Issue Type: Bug > Components: webhdfs >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Chao Sun >Priority: Major > Attachments: HDFS-11043.000.patch, HDFS-11043.001.patch, > HDFS-11043.002.patch, org.apache.hadoop.hdfs.web.TestWebHdfsTimeouts.txt > > > I'm seeing reproducible test failures for TestWebHdfsTimeouts locally, at > least on trunk.
[jira] [Commented] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption
[ https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408443#comment-16408443 ] Arpit Agarwal commented on HDFS-13314: -- Thanks [~szetszwo]. The v4 patch removes savedImage and addresses the Jenkins failures. bq. Question: why use ExitUtil.terminate(-1) rather than throwing an IOException? I want to guarantee process exit; I don't want the exception to be swallowed up the call stack. > NameNode should optionally exit if it detects FsImage corruption > > > Key: HDFS-13314 > URL: https://issues.apache.org/jira/browse/HDFS-13314 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, > HDFS-13314.03.patch, HDFS-13314.04.patch > > > The NameNode should optionally exit after writing an FsImage if it detects > the following kinds of corruptions: > # INodeReference pointing to non-existent INode > # Duplicate entries in snapshot deleted diff list. > This behavior is controlled via an undocumented configuration setting, and > disabled by default.
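The trade-off Arpit describes — a thrown exception can be caught and silently swallowed by an intermediate caller, while an explicit terminate cannot — can be illustrated with a small sketch. The exit is modeled as a stubbed {{Runnable}} so the example runs without killing the JVM; in the real patch it would be {{ExitUtil.terminate(-1)}}, and the class and method names below are hypothetical:

```java
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

/** Sketch: why terminate instead of throw. An IOException can be swallowed
 *  anywhere up the stack; a terminate hook fires unconditionally. */
class ImageSaver {
    static final AtomicBoolean terminated = new AtomicBoolean(false);
    // Stand-in for ExitUtil.terminate(-1), stubbed so the sketch is runnable.
    static final Runnable TERMINATE = () -> terminated.set(true);

    static void saveWithException(boolean corrupt) throws IOException {
        if (corrupt) throw new IOException("FsImage corruption detected");
    }

    static void saveWithTerminate(boolean corrupt) {
        if (corrupt) TERMINATE.run(); // no caller can swallow this
    }

    /** An intermediate caller that (wrongly) masks the corruption. */
    static boolean carelessCaller() {
        try {
            saveWithException(true);
        } catch (IOException swallowed) {
            // corruption silently ignored; the NameNode would keep running
        }
        return true; // proceeds as if the image were fine
    }
}
```

The exception path lets a careless caller carry on with a corrupt image on disk; the terminate path cannot be intercepted, which is the guarantee the comment asks for.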
[jira] [Updated] (HDFS-13314) NameNode should optionally exit if it detects FsImage corruption
[ https://issues.apache.org/jira/browse/HDFS-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Arpit Agarwal updated HDFS-13314: - Attachment: HDFS-13314.04.patch > NameNode should optionally exit if it detects FsImage corruption > > > Key: HDFS-13314 > URL: https://issues.apache.org/jira/browse/HDFS-13314 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Arpit Agarwal >Assignee: Arpit Agarwal >Priority: Major > Attachments: HDFS-13314.01.patch, HDFS-13314.02.patch, > HDFS-13314.03.patch, HDFS-13314.04.patch > > > The NameNode should optionally exit after writing an FsImage if it detects > the following kinds of corruptions: > # INodeReference pointing to non-existent INode > # Duplicate entries in snapshot deleted diff list. > This behavior is controlled via an undocumented configuration setting, and > disabled by default.
[jira] [Updated] (HDFS-13300) Ozone: Remove DatanodeID dependency from HDSL and Ozone
[ https://issues.apache.org/jira/browse/HDFS-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar updated HDFS-13300: --- Attachment: HDFS-13300-HDFS-7240.002.patch > Ozone: Remove DatanodeID dependency from HDSL and Ozone > > > Key: HDFS-13300 > URL: https://issues.apache.org/jira/browse/HDFS-13300 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Attachments: HDFS-13300-HDFS-7240.000.patch, > HDFS-13300-HDFS-7240.001.patch, HDFS-13300-HDFS-7240.002.patch > > > DatanodeID has been modified to add HDSL/Ozone related information > previously. This jira is to remove DatanodeID dependency from HDSL/Ozone to > make it truly pluggable without having the need to modify DatanodeID.
[jira] [Commented] (HDFS-13300) Ozone: Remove DatanodeID dependency from HDSL and Ozone
[ https://issues.apache.org/jira/browse/HDFS-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408434#comment-16408434 ] Nanda kumar commented on HDFS-13300: Thanks for the review, [~elek]. {quote}I am not sure if we need infoPort. As I remember it is used for the DatanodeHttp server which is not required for hdsl/ozone any more. {quote} We actually don't need this port, but to remove it we need to modify KSM and OzoneClient code (because of the ServiceDiscovery API). Created HDFS-13324 to track this. {quote}I am happy with this change but please keep the two plugin names consistent. {quote} Sure, created HDFS-13325 to track this. {quote}I can't see how the race condition between hdsl/object store services (=plugins) is handled. {quote} There is no race condition here. Plugins are loaded in the same order as they are specified in the property, so HdslDatanodeService is loaded first; this creates the DatanodeDetails instance. When ObjectStoreRestPlugin is loaded, it gets the DatanodeDetails instance from HdslDatanodeService and updates the OzoneRestPort. {{SCMNodeManager.handleHeartbeat}} doesn't have any idea about the ports used by the datanode (HdslDatanodeService & ObjectStoreRestPlugin). {quote}I think this is the reason behind the failing REST related unit tests (didn't check, just my guess). {quote} The test failures are caused by a bug in the {{MiniOzoneClassicCluster}} change. It has been fixed in patch v002. Change: MiniOzoneClassicCluster, line 124: it was {{conf.setStrings...}} when it should be {{dnConf.setStrings...}}. {quote}I am not sure if we need to persist the DatanodeDetails. I think it's enough to persist the UUID. {quote} True, we just need the UUID. The reason for storing DatanodeDetails is that we have a protobuf for DatanodeDetails, which makes it easy to persist :) {quote}If I understood well, now it is true, as all the ports are updated after reading the datanode descriptor from the file. {quote} Exactly. 
{quote}This is a small one, but some javadoc still uses the "Datanode ID" expression which could be confusing {quote} Thanks for the catch, fixed it in patch v002 > Ozone: Remove DatanodeID dependency from HDSL and Ozone > > > Key: HDFS-13300 > URL: https://issues.apache.org/jira/browse/HDFS-13300 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Attachments: HDFS-13300-HDFS-7240.000.patch, > HDFS-13300-HDFS-7240.001.patch, HDFS-13300-HDFS-7240.002.patch > > > DatanodeID has been modified to add HDSL/Ozone related information > previously. This jira is to remove DatanodeID dependency from HDSL/Ozone to > make it truly pluggable without having the need to modify DatanodeID.
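The ordering described in the comment — plugins loaded in the order they appear in the property, with the second plugin fetching the descriptor the first one created and adding its own port — can be sketched as follows (heavily simplified; these are not the real HDSL classes, and the port name and value are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

/** Minimal stand-in for the shared datanode descriptor. */
class DatanodeDetails {
    final Map<String, Integer> ports = new HashMap<>();
}

/** Sketch of ordered plugin loading: the first plugin creates the
 *  descriptor, the second retrieves it and records its REST port, so
 *  there is no race as long as the load order follows the property. */
class PluginLoader {
    private DatanodeDetails details;

    void loadHdslDatanodeService() {
        details = new DatanodeDetails(); // first plugin creates the descriptor
    }

    void loadObjectStoreRestPlugin(int ozoneRestPort) {
        // second plugin updates the descriptor created by the first
        details.ports.put("ozoneRest", ozoneRestPort);
    }

    DatanodeDetails getDetails() {
        return details;
    }
}
```

Because both services mutate one shared descriptor before the first heartbeat is sent, the SCM side never needs its own port-merging logic — which is the answer to review comment 3 above.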
[jira] [Created] (HDFS-13325) Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService
Nanda kumar created HDFS-13325: -- Summary: Ozone: Rename ObjectStoreRestPlugin to OzoneDatanodeService Key: HDFS-13325 URL: https://issues.apache.org/jira/browse/HDFS-13325 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Nanda kumar Assignee: Nanda kumar Based on [this| https://issues.apache.org/jira/browse/HDFS-13300?focusedCommentId=16408201=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16408201] comment, we can rename {{ObjectStoreRestPlugin}} to {{OzoneDatanodeService}}, so that the plugin name will be consistent with {{HdslDatanodeService}}.
[jira] [Created] (HDFS-13324) Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails
Nanda kumar created HDFS-13324: -- Summary: Ozone: Remove InfoPort and InfoSecurePort from DatanodeDetails Key: HDFS-13324 URL: https://issues.apache.org/jira/browse/HDFS-13324 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Nanda kumar Assignee: Nanda kumar We have removed the dependency on DatanodeID in HDSL/Ozone, and there is no need for InfoPort and InfoSecurePort. It is now safe to remove InfoPort and InfoSecurePort from DatanodeDetails.
[jira] [Commented] (HDFS-13250) RBF: Router to manage requests across multiple subclusters
[ https://issues.apache.org/jira/browse/HDFS-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408387#comment-16408387 ] Íñigo Goiri commented on HDFS-13250: Apparently, branch-2 doesn't have {{HdfsFileStatus#isDirectory()}} (or {{isFile()}}), so I'm committing an addendum [^HDFS-13250.000-addendum-branch-2.patch] for this fix to branch-2 and branch-2.9. > RBF: Router to manage requests across multiple subclusters > -- > > Key: HDFS-13250 > URL: https://issues.apache.org/jira/browse/HDFS-13250 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Fix For: 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13250.000-addendum-branch-2.patch, > HDFS-13250.000.patch, HDFS-13250.001.patch, HDFS-13250.002.patch, > HDFS-13250.003.patch, HDFS-13250.004.patch, HDFS-13250.005.patch > > > HDFS-13124 introduces the concept of mount points spanning multiple > subclusters. The Router should distribute the requests across these > subclusters.
[jira] [Updated] (HDFS-13250) RBF: Router to manage requests across multiple subclusters
[ https://issues.apache.org/jira/browse/HDFS-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-13250: --- Attachment: HDFS-13250.000-addendum-branch-2.patch > RBF: Router to manage requests across multiple subclusters > -- > > Key: HDFS-13250 > URL: https://issues.apache.org/jira/browse/HDFS-13250 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Major > Fix For: 2.10.0, 2.9.1, 3.0.2, 3.2.0 > > Attachments: HDFS-13250.000-addendum-branch-2.patch, > HDFS-13250.000.patch, HDFS-13250.001.patch, HDFS-13250.002.patch, > HDFS-13250.003.patch, HDFS-13250.004.patch, HDFS-13250.005.patch > > > HDFS-13124 introduces the concept of mount points spanning multiple > subclusters. The Router should distribute the requests across these > subclusters.
[jira] [Commented] (HDFS-13222) Update getBlocks method to take minBlockSize in RPC calls
[ https://issues.apache.org/jira/browse/HDFS-13222?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408386#comment-16408386 ] Konstantin Shvachko commented on HDFS-13222: Do I understand correctly that this jira enables changing {{minBlockSize}} for {{getBlocks()}} through the Balancer live, and eliminates the need to restart the NameNode for that? If so, we should probably update the release notes to make it clear, and I think it would make sense to push it beyond 3.1-only into earlier branches. All the way to 2.7? > Update getBlocks method to take minBlockSize in RPC calls > - > > Key: HDFS-13222 > URL: https://issues.apache.org/jira/browse/HDFS-13222 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer mover >Reporter: Bharat Viswanadham >Assignee: Bharat Viswanadham >Priority: Major > Fix For: 3.1.0 > > Attachments: HDFS-13222.00.patch, HDFS-13222.01.patch, > HDFS-13222.02.patch > > > > Using the balancer parameter in getBlocks was done in HDFS-9412. > > Pass the Balancer conf value from the Balancer to the NN via getBlocks in each RPC, > as [~szetszwo] suggested. >
[jira] [Commented] (HDFS-13279) Datanodes usage is imbalanced if number of nodes per rack is not equal
[ https://issues.apache.org/jira/browse/HDFS-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408379#comment-16408379 ] genericqa commented on HDFS-13279: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 13s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} compile {color} | {color:red} 7m 8s{color} | {color:red} root in trunk failed. {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 43s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 3s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 43s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 31s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} javac {color} | {color:red} 15m 31s{color} | {color:red} root generated 187 new + 1085 unchanged - 0 fixed = 1272 total (was 1085) {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 2m 47s{color} | {color:orange} root: The patch generated 13 new + 150 unchanged - 0 fixed = 163 total (was 150) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 5s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 8m 59s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 37s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 7s{color} | {color:red} hadoop-common in the patch failed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}113m 49s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 30s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black}204m 25s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.util.TestDiskChecker | | | hadoop.util.TestReadWriteDiskValidator | | | hadoop.hdfs.server.blockmanagement.TestBlockPlacementPolicyWithSmallRack | | | hadoop.hdfs.server.namenode.ha.TestRetryCacheWithHA | | | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks | | | hadoop.hdfs.web.TestWebHdfsTimeouts | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13279 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12915499/HDFS-13279.002.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 8ba343b17712 4.4.0-89-generic
[jira] [Created] (HDFS-13323) Ozone: freon should not retry creating keys immediately after chill mode failures
Xiaoyu Yao created HDFS-13323: - Summary: Ozone: freon should not retry creating keys immediately after chill mode failures Key: HDFS-13323 URL: https://issues.apache.org/jira/browse/HDFS-13323 Project: Hadoop HDFS Issue Type: Sub-task Reporter: Xiaoyu Yao Assignee: Xiaoyu Yao I've seen many create key failures immediately after spinning up the docker based ozone cluster. The error stack does not reveal that this is caused by chill mode (the SCM log has it). Freon could handle chill mode better, without too many create key retry failures in a short period of time.
[jira] [Updated] (HDFS-13317) Ozone: docker-compose should only be copied to hadoop-dist if Phdsl is enabled.
[ https://issues.apache.org/jira/browse/HDFS-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoyu Yao updated HDFS-13317: -- Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: HDFS-7240 Status: Resolved (was: Patch Available) Thanks, [~elek] for the review. I've committed the fix to the feature branch. > Ozone: docker-compose should only be copied to hadoop-dist if Phdsl is > enabled. > --- > > Key: HDFS-13317 > URL: https://issues.apache.org/jira/browse/HDFS-13317 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13317-HDFS-7240.001.patch >
[jira] [Commented] (HDFS-12977) Add stateId to RPC headers.
[ https://issues.apache.org/jira/browse/HDFS-12977?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408367#comment-16408367 ] Plamen Jeliazkov commented on HDFS-12977: - Thank you [~shv]. Thank you [~vagarychen]. I will open up the client-side focused JIRA after some discussion. > Add stateId to RPC headers. > --- > > Key: HDFS-12977 > URL: https://issues.apache.org/jira/browse/HDFS-12977 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ipc, namenode >Reporter: Konstantin Shvachko >Assignee: Plamen Jeliazkov >Priority: Major > Fix For: HDFS-12943 > > Attachments: HDFS_12977.trunk.001.patch, HDFS_12977.trunk.002.patch, > HDFS_12977.trunk.003.patch, HDFS_12977.trunk.004.patch, > HDFS_12977.trunk.005.patch, HDFS_12977.trunk.006.patch, > HDFS_12977.trunk.007.patch, HDFS_12977.trunk.008.patch > > > stateId is a new field in the RPC headers of NameNode proto calls. > stateId is the journal transaction Id, which represents LastSeenId for the > clients and LastWrittenId for NameNodes. See more in [reads from Standby > design > doc|https://issues.apache.org/jira/secure/attachment/12902925/ConsistentReadsFromStandbyNode.pdf].
[jira] [Commented] (HDFS-13055) Aggregate usage statistics from datanodes
[ https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408332#comment-16408332 ] Ajay Kumar commented on HDFS-13055: --- [~bharatviswa], thanks for the review; addressed your comments in v10. > Aggregate usage statistics from datanodes > - > > Key: HDFS-13055 > URL: https://issues.apache.org/jira/browse/HDFS-13055 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13055.001.patch, HDFS-13055.002.patch, > HDFS-13055.003.patch, HDFS-13055.004.patch, HDFS-13055.005.patch, > HDFS-13055.006.patch, HDFS-13055.007.patch, HDFS-13055.008.patch, > HDFS-13055.009.patch, HDFS-13055.010.patch > > > We collect variety of statistics in DataNodes and expose them via JMX. > Aggregating some of the high level statistics which we are already collecting > in {{DataNodeMetrics}} (like bytesRead,bytesWritten etc) over a configurable > time window will create a central repository accessible via JMX and UI. > Breaking this into 3 parts as suggested by [~arpitagarwal] > # Generate usage report on DN. (will handle in this jira) > # Pass usage report to NN periodically. > # Process report on NN and expose stats via JMX. >
[jira] [Updated] (HDFS-13055) Aggregate usage statistics from datanodes
[ https://issues.apache.org/jira/browse/HDFS-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ajay Kumar updated HDFS-13055: -- Attachment: HDFS-13055.010.patch > Aggregate usage statistics from datanodes > - > > Key: HDFS-13055 > URL: https://issues.apache.org/jira/browse/HDFS-13055 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Ajay Kumar >Assignee: Ajay Kumar >Priority: Major > Attachments: HDFS-13055.001.patch, HDFS-13055.002.patch, > HDFS-13055.003.patch, HDFS-13055.004.patch, HDFS-13055.005.patch, > HDFS-13055.006.patch, HDFS-13055.007.patch, HDFS-13055.008.patch, > HDFS-13055.009.patch, HDFS-13055.010.patch > > > We collect variety of statistics in DataNodes and expose them via JMX. > Aggregating some of the high level statistics which we are already collecting > in {{DataNodeMetrics}} (like bytesRead,bytesWritten etc) over a configurable > time window will create a central repository accessible via JMX and UI. > Breaking this into 3 parts as suggested by [~arpitagarwal] > # Generate usage report on DN. (will handle in this jira) > # Pass usage report to NN periodically. > # Process report on NN and expose stats via JMX. >
[jira] [Commented] (HDFS-13300) Ozone: Remove DatanodeID dependency from HDSL and Ozone
[ https://issues.apache.org/jira/browse/HDFS-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408201#comment-16408201 ] Elek, Marton commented on HDFS-13300: - Thanks for working on this, [~nandakumar131]. I am very happy to see that hdsl/ozone became more independent from datanode/hdfs. Overall the patch looks good, I have some comments: 1. I am not sure if we need infoPort. As I remember it is used for the DatanodeHttp server which is not required for hdsl/ozone any more. (We need an additional server in hdsl to help the container replication, which could be added to the descriptor, but it can be part of HDFS-11686.) 2. You renamed HdslServerPlugin to HdslDatanodeService. I am happy with this change but please keep the two plugin names consistent: in this case please rename ObjectStoreRestPlugin to ObjectStoreDatanodeService (or OzoneDatanodeService). As the two classes are very similar I prefer to use similar names (especially as the names are used in the configuration). 3. I can't see how the race condition between hdsl/object store services (=plugins) is handled. As I see it, the ObjectStoreRestPlugin updates the ozoneRestPort in the one DatanodeDetails which is part of the scm-datanode heartbeat. But I think SCMNodeManager.handleHeartbeat should contain some logic to update the ports from the heartbeat (I think it's used for the discovery). I think this is the reason behind the failing REST related unit tests (didn't check, just my guess). 4. I am not sure if we need to persist the DatanodeDetails. I think it's enough to persist the UUID. The ports could change (and in a containerized environment host/ip could also change in case of moving a container to a different host). But I understand this was not changed by this patch. I think the whole logic should work even if I modify any of the ports. If I understood well, now it is true, as all the ports are updated after reading the datanode descriptor from the file. 5. 
This is a small one, but some javadoc still uses the "Datanode ID" expression, which could be confusing (StorageContainerDatanodeProtocol, SCMTestMock) > Ozone: Remove DatanodeID dependency from HDSL and Ozone > > > Key: HDFS-13300 > URL: https://issues.apache.org/jira/browse/HDFS-13300 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Nanda kumar >Assignee: Nanda kumar >Priority: Major > Attachments: HDFS-13300-HDFS-7240.000.patch, > HDFS-13300-HDFS-7240.001.patch > > > DatanodeID has been modified to add HDSL/Ozone related information > previously. This jira is to remove DatanodeID dependency from HDSL/Ozone to > make it truly pluggable without having the need to modify DatanodeID.
[jira] [Commented] (HDFS-13291) RBF: Implement available space based OrderResolver
[ https://issues.apache.org/jira/browse/HDFS-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408194#comment-16408194 ] Íñigo Goiri commented on HDFS-13291: Thanks [~linyiqun] for [^HDFS-13291.003.patch]. * I would add a test case to TestRouterAllResolver. * In AvailableSpaceResolver there are lines split earlier than the 80-character limit at lines 54, 65, and 129 (it's minor but looks unnecessary). * We could add the corner cases for {{verifyRank()}}: 1, 0.5, 0.0, and BALANCER_PREFERENCE_DEFAULT. Not sure it is worth checking the illegal cases (e.g., -1, 2). As a general comment, LocalResolver and RandomResolver are very similar. It might be good to refactor a little and extract the shared implementation. > RBF: Implement available space based OrderResolver > -- > > Key: HDFS-13291 > URL: https://issues.apache.org/jira/browse/HDFS-13291 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Attachments: HDFS-13291.001.patch, HDFS-13291.002.patch, > HDFS-13291.003.patch > > > Implement an available-space-based OrderResolver; this type of resolver will > help balance the data across subclusters. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
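The corner cases suggested for the preference parameter can be pinned down with boundary checks. This is a hypothetical sketch, not the patch's actual {{verifyRank()}} helper: it assumes a resolver that, with balancer preference p in [0, 1], picks the subcluster with more free space with probability p, and the BALANCER_PREFERENCE_DEFAULT value here (0.6) is an assumption for illustration.

```java
// Hypothetical sketch of the corner cases to test for a space-based
// preference parameter p in [0, 1]; not the actual AvailableSpaceResolver.
public class PreferenceCorners {
    static final double BALANCER_PREFERENCE_DEFAULT = 0.6; // assumed value

    /** Pick "high" (the subcluster with more free space) when the draw falls below p. */
    static String choose(double p, double draw) {
        return draw < p ? "high" : "low";
    }

    static void check(boolean ok) {
        if (!ok) throw new AssertionError("corner case failed");
    }

    public static void main(String[] args) {
        check(choose(1.0, 0.999).equals("high")); // p=1.0: always prefer free space
        check(choose(0.0, 0.001).equals("low"));  // p=0.0: never prefer free space
        check(choose(0.5, 0.25).equals("high"));  // p=0.5: behaves like a coin flip
        check(choose(0.5, 0.75).equals("low"));
        check(choose(BALANCER_PREFERENCE_DEFAULT, 0.59).equals("high"));
        System.out.println("all corner cases hold");
    }
}
```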
[jira] [Assigned] (HDFS-13318) RBF: Fix FindBugs in hadoop-hdfs-rbf
[ https://issues.apache.org/jira/browse/HDFS-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ekanth S reassigned HDFS-13318: --- Assignee: Ekanth S (was: Íñigo Goiri) > RBF: Fix FindBugs in hadoop-hdfs-rbf > > > Key: HDFS-13318 > URL: https://issues.apache.org/jira/browse/HDFS-13318 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Ekanth S >Priority: Minor > > hadoop-hdfs-rbf has 3 FindBug warnings: > * NamenodePriorityComparator should be serializable > * RemoteMethod.getTypes() may expose internal representation > * RemoteMethod may store mutable objects -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13318) RBF: Fix FindBugs in hadoop-hdfs-rbf
[ https://issues.apache.org/jira/browse/HDFS-13318?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408184#comment-16408184 ] Ekanth S commented on HDFS-13318: - I'll give this a shot. > RBF: Fix FindBugs in hadoop-hdfs-rbf > > > Key: HDFS-13318 > URL: https://issues.apache.org/jira/browse/HDFS-13318 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Íñigo Goiri >Priority: Minor > > hadoop-hdfs-rbf has 3 FindBug warnings: > * NamenodePriorityComparator should be serializable > * RemoteMethod.getTypes() may expose internal representation > * RemoteMethod may store mutable objects -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13195) DataNode conf page cannot display the current value after reconfig
[ https://issues.apache.org/jira/browse/HDFS-13195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408167#comment-16408167 ] Íñigo Goiri commented on HDFS-13195: [~kihwal] thoughts on [^HDFS-13195.002.patch] and [^HDFS-13195-branch-2.7.002.patch]? > DataNode conf page cannot display the current value after reconfig > --- > > Key: HDFS-13195 > URL: https://issues.apache.org/jira/browse/HDFS-13195 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Affects Versions: 2.7.1 >Reporter: maobaolong >Assignee: maobaolong >Priority: Minor > Fix For: 2.7.1 > > Attachments: HDFS-13195-branch-2.7.001.patch, > HDFS-13195-branch-2.7.002.patch, HDFS-13195.001.patch, HDFS-13195.002.patch > > > Now branch-2.7 supports dfs.datanode.data.dir reconfig, but after I > reconfig this key, the conf page's value is still the old config value. > The reason is that: > {code:java} > public DatanodeHttpServer(final Configuration conf, > final DataNode datanode, > final ServerSocketChannel externalHttpChannel) > throws IOException { > this.conf = conf; > Configuration confForInfoServer = new Configuration(conf); > confForInfoServer.setInt(HttpServer2.HTTP_MAX_THREADS, 10); > HttpServer2.Builder builder = new HttpServer2.Builder() > .setName("datanode") > .setConf(confForInfoServer) > .setACL(new AccessControlList(conf.get(DFS_ADMIN, " "))) > .hostName(getHostnameForSpnegoPrincipal(confForInfoServer)) > .addEndpoint(URI.create("http://localhost:0")) > .setFindPort(true); > this.infoServer = builder.build(); > {code} > The confForInfoServer is a new Configuration instance; when dfsadmin > reconfigures the datanode's config, the change is not reflected in > confForInfoServer, so we should use the datanode's conf. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
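The stale-value behavior described above can be illustrated without Hadoop on the classpath. This sketch uses a minimal Map-backed stand-in for Hadoop's Configuration (a hypothetical simplification, not the real class): the copy constructor snapshots the properties, so a later reconfig on the original is invisible to the copy, which is exactly why the conf page keeps serving the old value.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for Hadoop's Configuration copy constructor (hypothetical
// simplification): copying snapshots the current properties, so a later
// reconfig on the original datanode conf never reaches the copy.
public class ConfCopyDemo {
    static class Conf {
        final Map<String, String> props = new HashMap<>();
        Conf() {}
        Conf(Conf other) { props.putAll(other.props); } // copy = snapshot
        void set(String k, String v) { props.put(k, v); }
        String get(String k) { return props.get(k); }
    }

    public static void main(String[] args) {
        Conf dnConf = new Conf();
        dnConf.set("dfs.datanode.data.dir", "/data1");
        // What DatanodeHttpServer does: build the info server on a copy.
        Conf confForInfoServer = new Conf(dnConf);
        // dfsadmin reconfigures the live datanode conf afterwards.
        dnConf.set("dfs.datanode.data.dir", "/data1,/data2");
        // The copy still serves the stale value; the live conf has the new one.
        System.out.println(confForInfoServer.get("dfs.datanode.data.dir")); // /data1
        System.out.println(dnConf.get("dfs.datanode.data.dir")); // /data1,/data2
    }
}
```

Passing the datanode's own conf to the info server (as the patch proposes) avoids the snapshot entirely.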
[jira] [Commented] (HDFS-13317) Ozone: docker-compose should only be copied to hadoop-dist if Phdsl is enabled.
[ https://issues.apache.org/jira/browse/HDFS-13317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408060#comment-16408060 ] Elek, Marton commented on HDFS-13317: - +1. LGTM as of now. This should change after HADOOP-15257, as I would like to provide compose files for the vanilla hdfs clusters as well. But it could be fixed when we get there... > Ozone: docker-compose should only be copied to hadoop-dist if Phdsl is > enabled. > --- > > Key: HDFS-13317 > URL: https://issues.apache.org/jira/browse/HDFS-13317 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xiaoyu Yao >Assignee: Xiaoyu Yao >Priority: Major > Attachments: HDFS-13317-HDFS-7240.001.patch > > -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13279) Datanodes usage is imbalanced if number of nodes per rack is not equal
[ https://issues.apache.org/jira/browse/HDFS-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tao Jie updated HDFS-13279: --- Attachment: HDFS-13279.002.patch > Datanodes usage is imbalanced if number of nodes per rack is not equal > -- > > Key: HDFS-13279 > URL: https://issues.apache.org/jira/browse/HDFS-13279 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.3, 3.0.0 >Reporter: Tao Jie >Priority: Major > Attachments: HDFS-13279.001.patch, HDFS-13279.002.patch > > > In a Hadoop cluster, the number of nodes per rack can differ. For > example, if we have 50 datanodes in all and 15 datanodes per rack, 5 nodes > would remain on the last rack. In this situation, we find that storage > usage on the last 5 nodes is much higher than on other nodes. > With the default block placement policy, for each block, the first > replica has the same probability of being written to each datanode, but the > probability of the 2nd/3rd replica being written to the last 5 nodes is > much higher than for other nodes. > Consider writing 50 blocks to these 50 datanodes. The first rep of the 50 > blocks would be distributed to the 50 nodes equally. The 2nd rep of the blocks > whose 1st rep is on rack1 (15 reps) would be sent equally to the other 35 > nodes, and each node receives 0.428 rep. So do blocks on rack2 and rack3. As a > result, a node on rack4 (5 nodes) would receive 1.29 replicas in all, while > other nodes would receive 0.97 reps. > ||-||Rack1(15 nodes)||Rack2(15 nodes)||Rack3(15 nodes)||Rack4(5 nodes)|| > |From rack1|-|15/35=0.43|0.43|0.43| > |From rack2|0.43|-|0.43|0.43| > |From rack3|0.43|0.43|-|0.43| > |From rack4|5/45=0.11|0.11|0.11|-| > |Total|0.97|0.97|0.97|1.29| -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
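The per-node totals in the table above can be reproduced with a short calculation. This sketch implements the reporter's simplified model (each rack's off-rack replica traffic spread uniformly over the nodes outside that rack), not the actual BlockPlacementPolicyDefault code:

```java
// Reproduces the expected off-rack replica load per node from the
// reporter's model: rackSizes[r] blocks have their 1st rep on rack r,
// and their off-rack reps are spread uniformly over the other nodes.
public class ReplicaImbalance {
    static double load(int[] rackSizes, int target) {
        int total = 0;
        for (int s : rackSizes) total += s; // 50 nodes in the example
        double perNode = 0.0;
        for (int r = 0; r < rackSizes.length; r++) {
            if (r == target) continue; // off-rack replicas only
            perNode += (double) rackSizes[r] / (total - rackSizes[r]);
        }
        return perNode;
    }

    public static void main(String[] args) {
        int[] racks = {15, 15, 15, 5}; // 50 datanodes, uneven last rack
        System.out.printf("rack1 node: %.2f%n", load(racks, 0)); // matches table: 0.97
        System.out.printf("rack4 node: %.2f%n", load(racks, 3)); // matches table: 1.29
    }
}
```

For a rack4 node this is 3 × 15/35 ≈ 1.29, versus 15/35 + 15/35 + 5/45 ≈ 0.97 for a node on the larger racks, which is the ~33% imbalance the table shows.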
[jira] [Commented] (HDFS-13319) Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path
[ https://issues.apache.org/jira/browse/HDFS-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16408015#comment-16408015 ] Elek, Marton commented on HDFS-13319: - +1/LGTM. You are right, HADOOP_OZONE_HOME was not defined. I tested the change locally and it worked well. > Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path > --- > > Key: HDFS-13319 > URL: https://issues.apache.org/jira/browse/HDFS-13319 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13319-HDFS-7240.001.patch > > > start-ozone.sh/stop-ozone.sh fail because the shell script fails to resolve > the full path. Interestingly, the start-dfs.sh script works as expected. > {code} > /opt/hadoop/hadoop-3.2.0-SNAPSHOT/sbin/start-ozone.sh > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-13322) fuse dfs - uid persists when switching between ticket caches
Alex Volskiy created HDFS-13322: --- Summary: fuse dfs - uid persists when switching between ticket caches Key: HDFS-13322 URL: https://issues.apache.org/jira/browse/HDFS-13322 Project: Hadoop HDFS Issue Type: Bug Components: fuse-dfs Affects Versions: 2.6.0 Environment: Linux xx.xx.xx.xxx 3.10.0-514.el7.x86_64 #1 SMP Wed Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux Reporter: Alex Volskiy The symptoms of this issue are the same as described in HDFS-3608 except the workaround that was applied (detect changes in UID ticket cache) doesn't resolve the issue when multiple ticket caches are in use by the same user. Our use case requires that a job scheduler running as a specific uid obtain separate kerberos sessions per job and that each of these sessions use a separate cache. When switching sessions this way, no change is made to the original ticket cache so the cached filesystem instance doesn't get regenerated. {{$ export KRB5CCNAME=/tmp/krb5cc_session1}} {{$ kinit user_a@domain}} {{$ touch /fuse_mount/tmp/testfile1}} {{$ ls -l /fuse_mount/tmp/testfile1}} {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile1*}} {{$ export KRB5CCNAME=/tmp/krb5cc_session2}} {{$ kinit user_b@domain}} {{$ touch /fuse_mount/tmp/testfile2}} {{$ ls -l /fuse_mount/tmp/testfile2}} {{ *-rwxrwxr-x 1 user_a user_a 0 Mar 21 13:37 /fuse_mount/tmp/testfile2*}} {{ }}{color:#d04437}*{{** expected owner to be user_b **}}*{color} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13321) Inadequate information for handling catch clauses
[ https://issues.apache.org/jira/browse/HDFS-13321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Zhenhao Li updated HDFS-13321: -- Description: There are some situations where different exception types are caught, but the handling of those exceptions cannot show the differences between those types. Here are the code snippets we found which have this problem: *hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java* [https://github.com/apache/hadoop/blob/bec79ca2495abdc347d64628151c90f5ce777046/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java] At Line *233* and Line *235.* We can see that two exception types are caught, but the logging statements here cannot show the exception type at all. Also there are comments for these two catch clauses respectively, one is "{color:#707070}// namenode is busy{color}", the other one is "{color:#707070}// namenode is not available",{color:#33} but the log messages are too generic and cannot show the "busy" or "not available" state of the namenode.{color}{color} It may cause confusion for the person who is reading the log: the person cannot know what exception happened here and cannot distinguish logs generated by these two statements. Maybe adding stack trace information to these two logging statements and changing the log messages to describe the specific situations is a simple way to improve it. was: There are some situations where different exception types are caught, but the handling of those exceptions cannot show the differences between those types. 
Here are the code snippets we found which have this problem: *hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java* [https://github.com/apache/hadoop/blob/bec79ca2495abdc347d64628151c90f5ce777046/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java] At Line *233* and Line *235.* We can see that two exception types are caught, but the logging statements here cannot show the exception type at all. Also there are comments for these two catch clauses respectively, one is "{color:#707070}// namenode is busy{color}", the other one is "{color:#707070}// namenode is not available", {color:#33}but the log messages are too generic and cannot show the "busy" or "not available" state of the namenode.{color}{color} It may cause confusion for the person who is reading the log: the person cannot know what exception happened here and cannot distinguish logs generated by these two statements. Maybe adding stack trace information to these two logging statements and changing the log messages to describe the specific situations is a simple way to improve it. > Inadequate information for handling catch clauses > - > > Key: HDFS-13321 > URL: https://issues.apache.org/jira/browse/HDFS-13321 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 3.0.0 >Reporter: Zhenhao Li >Priority: Major > Labels: easyfix > > There are some situations where different exception types are caught, but the > handling of those exceptions cannot show the differences between those types. 
> Here are the code snippets we found which have this problem: > *hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java* > [https://github.com/apache/hadoop/blob/bec79ca2495abdc347d64628151c90f5ce777046/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java] > At Line *233* and Line *235.* We can see that two exception types are caught, > but the logging statements here cannot show the exception type at all. > Also there are comments for these two catch clauses respectively, one is > "{color:#707070}// namenode is busy{color}", the other one is > "{color:#707070}// namenode is not available",{color:#33} but the log > messages are too generic and cannot show the "busy" or "not available" state of the > namenode.{color}{color} > It may cause confusion for the person who is reading the log: the person > cannot know what exception happened here and cannot distinguish logs generated > by these two statements. > Maybe adding stack trace information to these two logging statements and > changing the log messages to describe the specific situations is a simple way to > improve it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
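The improvement suggested in the description can be sketched outside of Hadoop. This is an assumed logging shape, not the actual BPServiceActor code: appending the exception itself to the message carries the concrete class name, so the two formerly identical log lines become distinguishable.

```java
// Sketch of the suggestion: include the caught exception in the log
// message so each catch clause produces a distinguishable line.
public class CatchLogging {
    static String describe(String context, Exception e) {
        // e.toString() carries the concrete type and message,
        // e.g. "java.net.ConnectException: connection refused".
        return context + ": " + e;
    }

    public static void main(String[] args) {
        // Two different failures that a generic message would conflate:
        String busy = describe("Problem connecting to server",
                new java.io.IOException("namenode is busy"));
        String down = describe("Problem connecting to server",
                new java.net.ConnectException("namenode is not available"));
        System.out.println(busy);
        System.out.println(down);
        // The lines now differ by exception type, so a log reader can
        // tell the "busy" case from the "not available" case.
    }
}
```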
[jira] [Created] (HDFS-13321) Inadequate information for handling catch clauses
Zhenhao Li created HDFS-13321: - Summary: Inadequate information for handling catch clauses Key: HDFS-13321 URL: https://issues.apache.org/jira/browse/HDFS-13321 Project: Hadoop HDFS Issue Type: Improvement Components: datanode Affects Versions: 3.0.0 Reporter: Zhenhao Li There are some situations where different exception types are caught, but the handling of those exceptions cannot show the differences between those types. Here are the code snippets we found which have this problem: *hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java* [https://github.com/apache/hadoop/blob/bec79ca2495abdc347d64628151c90f5ce777046/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BPServiceActor.java] At Line *233* and Line *235.* We can see that two exception types are caught, but the logging statements here cannot show the exception type at all. Also there are comments for these two catch clauses respectively, one is "{color:#707070}// namenode is busy{color}", the other one is "{color:#707070}// namenode is not available", {color:#33}but the log messages are too generic and cannot show the "busy" or "not available" state of the namenode.{color}{color} It may cause confusion for the person who is reading the log: the person cannot know what exception happened here and cannot distinguish logs generated by these two statements. Maybe adding stack trace information to these two logging statements and changing the log messages to describe the specific situations is a simple way to improve it. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13300) Ozone: Remove DatanodeID dependency from HDSL and Ozone
[ https://issues.apache.org/jira/browse/HDFS-13300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407883#comment-16407883 ] genericqa commented on HDFS-13300: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 44 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 1m 35s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 59s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 57s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 32s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 7m 35s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 57s{color} | {color:red} hadoop-hdsl/common in HDFS-7240 has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 52s{color} | {color:red} hadoop-hdsl/container-service in HDFS-7240 has 66 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 52s{color} | {color:red} hadoop-ozone/common in HDFS-7240 has 2 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 37s{color} | {color:red} hadoop-ozone/objectstore-service in HDFS-7240 has 1 extant Findbugs warnings. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 42s{color} | {color:red} hadoop-ozone/ozone-manager in HDFS-7240 has 1 extant Findbugs warnings. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 6m 15s{color} | {color:green} HDFS-7240 passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 15s{color} | {color:red} client in the patch failed. 
{color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 15m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 6m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 0s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 32s{color} | {color:green} There were no new shelldocs issues. {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 2s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 27s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-dist hadoop-ozone/integration-test {color} | |
[jira] [Commented] (HDFS-13291) RBF: Implement available space based OrderResolver
[ https://issues.apache.org/jira/browse/HDFS-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407847#comment-16407847 ] genericqa commented on HDFS-13291: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 48s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 39s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf in trunk has 3 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 13s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 40s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 43s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 3 unchanged - 0 fixed = 4 total (was 3) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 22s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 46m 47s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf | | | org.apache.hadoop.hdfs.server.federation.resolver.order.AvailableSpaceResolver$SubclusterSpaceComparator implements Comparator but not Serializable At AvailableSpaceResolver.java:Serializable At AvailableSpaceResolver.java:[lines 215-232] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13291 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12915470/HDFS-13291.003.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle cc | | uname | Linux 25a4272b612f 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6c63cc7 | | maven | version: Apache
[jira] [Commented] (HDFS-13319) Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path
[ https://issues.apache.org/jira/browse/HDFS-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407822#comment-16407822 ] Mukul Kumar Singh commented on HDFS-13319: -- The issue happens because of the use of HADOOP_OZONE_HOME in the script. HADOOP_HDFS_HOME should be used in its place. [~elek][~anu] Please have a look at the patch. > Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path > --- > > Key: HDFS-13319 > URL: https://issues.apache.org/jira/browse/HDFS-13319 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13319-HDFS-7240.001.patch > > > start-ozone.sh/stop-ozone.sh fail because the shell script fails to resolve > the full path. Interestingly, the start-dfs.sh script works as expected. > {code} > /opt/hadoop/hadoop-3.2.0-SNAPSHOT/sbin/start-ozone.sh > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13320) Ozone:Support for MicrobenchMarking Tool
[ https://issues.apache.org/jira/browse/HDFS-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407819#comment-16407819 ] genericqa commented on HDFS-13320: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 49s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 7 new or modified test files. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 5m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 1s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 53s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 11s{color} | {color:green} HDFS-7240 passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 32s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 24s{color} | {color:red} common in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 30s{color} | {color:red} integration-test in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 22s{color} | {color:red} tools in HDFS-7240 failed. 
{color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 57s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue} 0m 0s{color} | {color:blue} Skipped patched modules with no Java source: hadoop-project hadoop-ozone/integration-test {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 23s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 18s{color} | {color:red} common in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 0m 17s{color} | {color:red} tools in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s{color} | {color:red} server-scm in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s{color} | {color:red} common in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 21s{color} | {color:red} integration-test in HDFS-7240 failed. {color} | | {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 25s{color} | {color:red} tools in HDFS-7240 failed. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 18s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 10s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s{color} | {color:red} common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 11s{color} | {color:red} integration-test in the patch failed. 
{color} | | {color:red}-1{color} | {color:red} mvninstall {color} | {color:red} 0m 12s{color} | {color:red} tools in the patch failed. {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 12m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 2m 34s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 18s{color} | {color:red} server-scm in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 18s{color} | {color:red} common in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 0m 20s{color} | {color:red} integration-test in the patch failed. {color} | | {color:red}-1{color} | {color:red} mvnsite {color} |
[jira] [Commented] (HDFS-13204) RBF: Optimize name service safe mode icon
[ https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407812#comment-16407812 ] genericqa commented on HDFS-13204: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 23s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 24s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 27m 49s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 9s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 11m 48s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 40m 56s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:d4cc50f | | JIRA Issue | HDFS-13204 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12915465/HDFS-13204.004.patch | | Optional Tests | asflicense shadedclient | | uname | Linux e37f75966abf 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 19:09:19 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 6c63cc7 | | maven | version: Apache Maven 3.3.9 | | Max. process+thread count | 323 (vs. ulimit of 1) | | modules | C: hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23610/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > RBF: Optimize name service safe mode icon > - > > Key: HDFS-13204 > URL: https://issues.apache.org/jira/browse/HDFS-13204 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: liuhongtong >Assignee: liuhongtong >Priority: Minor > Attachments: HDFS-13204.001.patch, HDFS-13204.002.patch, > HDFS-13204.003.patch, HDFS-13204.004.patch, Routers.png, Subclusters.png, > image-2018-02-28-18-33-09-972.png, image-2018-02-28-18-33-47-661.png, > image-2018-02-28-18-35-35-708.png > > > In the federation health webpage, the safe mode icons for Subclusters and Routers > are inconsistent. > The safe mode icon for Subclusters may mislead users into thinking the name service is > under maintenance. > !image-2018-02-28-18-33-09-972.png! > The safe mode icon of Routers: > !image-2018-02-28-18-33-47-661.png! > In fact, if the name service is in safe mode, users can't perform write-related > operations. So the safe mode icon in Subclusters should be changed, > which would be clearer. > !image-2018-02-28-18-35-35-708.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13319) Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path
[ https://issues.apache.org/jira/browse/HDFS-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407786#comment-16407786 ] genericqa commented on HDFS-13319: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} | || || || || {color:brown} HDFS-7240 Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 0s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} HDFS-7240 passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 9m 0s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 23s{color} | {color:green} There were no new shellcheck issues. {color} | | {color:green}+1{color} | {color:green} shelldocs {color} | {color:green} 0m 12s{color} | {color:green} There were no new shelldocs issues. 
{color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 10m 36s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s{color} | {color:green} common in the patch passed. {color} | | {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 22s{color} | {color:red} The patch generated 10 ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 39m 35s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:215a942 | | JIRA Issue | HDFS-13319 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12915462/HDFS-13319-HDFS-7240.001.patch | | Optional Tests | asflicense mvnsite unit shellcheck shelldocs | | uname | Linux 47cbbc476e89 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-7240 / 7ace05b | | maven | version: Apache Maven 3.3.9 | | shellcheck | v0.4.6 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/23609/testReport/ | | asflicense | https://builds.apache.org/job/PreCommit-HDFS-Build/23609/artifact/out/patch-asflicense-problems.txt | | Max. process+thread count | 410 (vs. ulimit of 1) | | modules | C: hadoop-ozone/common U: hadoop-ozone/common | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23609/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. 
> Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path > --- > > Key: HDFS-13319 > URL: https://issues.apache.org/jira/browse/HDFS-13319 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13319-HDFS-7240.001.patch > > > start-ozone.sh/stop-ozone.sh fail because the shell scripts fail to resolve > the full path. Interestingly, the start-dfs.sh script works as expected. > {code} > /opt/hadoop/hadoop-3.2.0-SNAPSHOT/sbin/start-ozone.sh > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005)
[jira] [Comment Edited] (HDFS-13291) RBF: Implement available space based OrderResolver
[ https://issues.apache.org/jira/browse/HDFS-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407785#comment-16407785 ] Yiqun Lin edited comment on HDFS-13291 at 3/21/18 11:57 AM: Thanks for the review, [~elgoiri] . Attach the updated patch. was (Author: linyiqun): Thanks for the review, @[~elgoiri] . Attach the updated patch. > RBF: Implement available space based OrderResolver > -- > > Key: HDFS-13291 > URL: https://issues.apache.org/jira/browse/HDFS-13291 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Attachments: HDFS-13291.001.patch, HDFS-13291.002.patch, > HDFS-13291.003.patch > > > Implement an available space based OrderResolver; this type of resolver will help > balance data across subclusters. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13291) RBF: Implement available space based OrderResolver
[ https://issues.apache.org/jira/browse/HDFS-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407785#comment-16407785 ] Yiqun Lin commented on HDFS-13291: -- Thanks for the review, @[~elgoiri] . Attach the updated patch. > RBF: Implement available space based OrderResolver > -- > > Key: HDFS-13291 > URL: https://issues.apache.org/jira/browse/HDFS-13291 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Attachments: HDFS-13291.001.patch, HDFS-13291.002.patch, > HDFS-13291.003.patch > > > Implement an available space based OrderResolver; this type of resolver will help > balance data across subclusters. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
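[Editor's note] The HDFS-13291 patches are not inlined in this digest, so the following is only a rough sketch of what "available space based" ordering means; the function name, the subcluster ids, and the weighting scheme are illustrative assumptions, not the actual Hadoop implementation.

```python
import random

def order_by_available_space(available):
    """Return subcluster ids in a weighted-random order so that subclusters
    with more available space tend to be tried first.

    Uses the Efraimidis-Spirakis trick: sort by u**(1/weight) descending,
    which yields weighted sampling without replacement."""
    return sorted(
        available,
        key=lambda ns: random.random() ** (1.0 / max(available[ns], 1)),
        reverse=True,
    )

# Example: ns0 has by far the most free space, so it usually comes first,
# while the ordering still varies run to run to spread load.
print(order_by_available_space({"ns0": 100_000, "ns1": 50_000, "ns2": 1_000}))
```

Randomizing rather than always picking the emptiest subcluster avoids herding every new write onto a single target.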
[jira] [Updated] (HDFS-13291) RBF: Implement available space based OrderResolver
[ https://issues.apache.org/jira/browse/HDFS-13291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yiqun Lin updated HDFS-13291: - Attachment: HDFS-13291.003.patch > RBF: Implement available space based OrderResolver > -- > > Key: HDFS-13291 > URL: https://issues.apache.org/jira/browse/HDFS-13291 > Project: Hadoop HDFS > Issue Type: Sub-task >Affects Versions: 3.0.0 >Reporter: Yiqun Lin >Assignee: Yiqun Lin >Priority: Major > Attachments: HDFS-13291.001.patch, HDFS-13291.002.patch, > HDFS-13291.003.patch > > > Implement an available space based OrderResolver; this type of resolver will help > balance data across subclusters. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13204) RBF: Optimize name service safe mode icon
[ https://issues.apache.org/jira/browse/HDFS-13204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] liuhongtong updated HDFS-13204: --- Attachment: HDFS-13204.004.patch > RBF: Optimize name service safe mode icon > - > > Key: HDFS-13204 > URL: https://issues.apache.org/jira/browse/HDFS-13204 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: liuhongtong >Assignee: liuhongtong >Priority: Minor > Attachments: HDFS-13204.001.patch, HDFS-13204.002.patch, > HDFS-13204.003.patch, HDFS-13204.004.patch, Routers.png, Subclusters.png, > image-2018-02-28-18-33-09-972.png, image-2018-02-28-18-33-47-661.png, > image-2018-02-28-18-35-35-708.png > > > In the federation health webpage, the safe mode icons for Subclusters and Routers > are inconsistent. > The safe mode icon for Subclusters may mislead users into thinking the name service is > under maintenance. > !image-2018-02-28-18-33-09-972.png! > The safe mode icon of Routers: > !image-2018-02-28-18-33-47-661.png! > In fact, if the name service is in safe mode, users can't perform write-related > operations. So the safe mode icon in Subclusters should be changed, > which would be clearer. > !image-2018-02-28-18-35-35-708.png! -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDFS-11991) Ozone: Ozone shell: the root is assumed to hdfs
[ https://issues.apache.org/jira/browse/HDFS-11991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Nanda kumar reassigned HDFS-11991: -- Assignee: Shashikant Banerjee (was: Nanda kumar) > Ozone: Ozone shell: the root is assumed to hdfs > --- > > Key: HDFS-11991 > URL: https://issues.apache.org/jira/browse/HDFS-11991 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Reporter: Anu Engineer >Assignee: Shashikant Banerjee >Priority: Major > Labels: OzonePostMerge > Fix For: HDFS-7240 > > > The *hdfs oz* command, or ozone shell, has an option to easily run some > commands as root by specifying _--root_ on the command line. > But after HDFS-11655 that assumption is no longer true. We need to detect the > user that started the scm/ksm service and map _root_ to that > user. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-13279) Datanodes usage is imbalanced if number of nodes per rack is not equal
[ https://issues.apache.org/jira/browse/HDFS-13279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16407762#comment-16407762 ] genericqa commented on HDFS-13279: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s{color} | {color:blue} Docker mode activated. {color} | | {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 5s{color} | {color:red} HDFS-13279 does not apply to trunk. Rebase required? Wrong Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} | \\ \\ || Subsystem || Report/Notes || | JIRA Issue | HDFS-13279 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12915451/HDFS-13279.001.patch | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/23608/console | | Powered by | Apache Yetus 0.8.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Datanodes usage is imbalanced if number of nodes per rack is not equal > -- > > Key: HDFS-13279 > URL: https://issues.apache.org/jira/browse/HDFS-13279 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.3, 3.0.0 >Reporter: Tao Jie >Priority: Major > Attachments: HDFS-13279.001.patch > > > In a Hadoop cluster, the number of nodes per rack may differ. For > example, if we have 50 datanodes in all and 15 datanodes per rack, only 5 nodes > remain on the last rack. In this situation, we find that storage > usage on those last 5 nodes is much higher than on the other nodes. > With the default block placement policy, each block's first > replica has the same probability of being written to each datanode, but the > probability of the 2nd/3rd replica being written to the last 5 nodes is > much higher than for the other nodes. > Suppose we write 50 blocks to these 50 datanodes. The first replica of the 50 blocks > would be distributed to the 50 nodes equally. The 2nd replica of blocks whose 1st replica > is on rack1 (15 replicas) would be sent equally to the other 35 nodes, and each node > would receive 0.428 replicas. The same holds for blocks on rack2 and rack3. As a result, a node on > rack4 (5 nodes) would receive 1.29 replicas in all, while the other nodes would > receive 0.97 replicas. > ||-||Rack1(15 nodes)||Rack2(15 nodes)||Rack3(15 nodes)||Rack4(5 nodes)|| > |From rack1|-|15/35=0.43|0.43|0.43| > |From rack2|0.43|-|0.43|0.43| > |From rack3|0.43|0.43|-|0.43| > |From rack4|5/45=0.11|0.11|0.11|-| > |Total|0.97|0.97|0.97|1.29| -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
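[Editor's note] The per-node expectations in the HDFS-13279 table above can be reproduced with a short sketch (illustrative arithmetic only, not Hadoop code; it models one extra replica per block, as the description does):

```python
# 50 blocks on 50 datanodes across racks of 15/15/15/5 nodes: each node gets
# exactly one 1st replica, and each block's 2nd replica is spread uniformly
# over the nodes outside the 1st replica's rack.
RACKS = {"rack1": 15, "rack2": 15, "rack3": 15, "rack4": 5}

def expected_extra_reps(target):
    """Expected number of 2nd replicas landing on ONE node of `target` rack."""
    total = 0.0
    for src, n_src in RACKS.items():
        if src == target:
            continue
        # n_src blocks have their 1st replica on rack `src`; their 2nd
        # replicas are shared by every node outside `src`.
        nodes_outside = sum(n for rack, n in RACKS.items() if rack != src)
        total += n_src / nodes_outside
    return total

for rack in RACKS:
    print(rack, round(expected_extra_reps(rack), 2))
# rack1..rack3 come out to ~0.97 while each rack4 node gets ~1.29,
# matching the "Total" row of the table.
```

The asymmetry comes entirely from the small rack: its 5 nodes absorb 15/35 of the off-rack replicas from each large rack, while contributing only 5/45 back.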
[jira] [Updated] (HDFS-13319) Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path
[ https://issues.apache.org/jira/browse/HDFS-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-13319: - Status: Patch Available (was: Open) > Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path > --- > > Key: HDFS-13319 > URL: https://issues.apache.org/jira/browse/HDFS-13319 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13319-HDFS-7240.001.patch > > > start-ozone.sh/stop-ozone.sh fail because the shell scripts fail to resolve > the full path. Interestingly, the start-dfs.sh script works as expected. > {code} > /opt/hadoop/hadoop-3.2.0-SNAPSHOT/sbin/start-ozone.sh > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-13319) Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path
[ https://issues.apache.org/jira/browse/HDFS-13319?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mukul Kumar Singh updated HDFS-13319: - Attachment: HDFS-13319-HDFS-7240.001.patch > Ozone: start-ozone.sh/stop-ozone.sh fail because of unresolved path > --- > > Key: HDFS-13319 > URL: https://issues.apache.org/jira/browse/HDFS-13319 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: ozone >Affects Versions: HDFS-7240 >Reporter: Mukul Kumar Singh >Assignee: Mukul Kumar Singh >Priority: Major > Fix For: HDFS-7240 > > Attachments: HDFS-13319-HDFS-7240.001.patch > > > start-ozone.sh/stop-ozone.sh fail because the shell scripts fail to resolve > the full path. Interestingly, the start-dfs.sh script works as expected. > {code} > /opt/hadoop/hadoop-3.2.0-SNAPSHOT/sbin/start-ozone.sh > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > y129.l42scl.hortonworks.com: bash: /bin/oz: No such file or directory > {code} -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org