[jira] [Commented] (HDFS-10206) getBlockLocations might not sort datanodes properly by distance
[ https://issues.apache.org/jira/browse/HDFS-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717602#comment-15717602 ] Nandakumar commented on HDFS-10206:
---
No, "nodes of the same network path" refers to nodes on the same rack. Node#getNetworkLocation() will return the path to the Node, with the node name itself not included in the path: for a node "/dc1/rack1/datanode1", Node#getNetworkLocation() will return "/dc1/rack1".

> getBlockLocations might not sort datanodes properly by distance
> ---
>
> Key: HDFS-10206
> URL: https://issues.apache.org/jira/browse/HDFS-10206
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Ming Ma
> Assignee: Nandakumar
> Attachments: HDFS-10206.000.patch, HDFS-10206.001.patch,
> HDFS-10206.002.patch
>
> If the DFSClient machine is not a datanode, but it shares its rack with some
> datanodes of the HDFS block requested, {{DatanodeManager#sortLocatedBlocks}}
> might not put the local-rack datanodes at the beginning of the sorted list.
> That is because the function didn't call {{networktopology.add(client);}} to
> properly set the node's parent node, something required by
> {{networktopology.sortByDistance}} to compute the distance between two nodes
> in the same topology tree.
> Another issue with {{networktopology.sortByDistance}} is that it only
> distinguishes local rack from remote rack, but it doesn't support general
> distance calculation to tell how remote the rack is.
> {noformat}
> NetworkTopology.java
> protected int getWeight(Node reader, Node node) {
>   // 0 is local, 1 is same rack, 2 is off rack
>   // Start off by initializing to off rack
>   int weight = 2;
>   if (reader != null) {
>     if (reader.equals(node)) {
>       weight = 0;
>     } else if (isOnSameRack(reader, node)) {
>       weight = 1;
>     }
>   }
>   return weight;
> }
> {noformat}
> HDFS-10203 has suggested moving the sorting from the namenode to the
> DFSClient to address another issue. Regardless of where we do the sorting,
> we still need to fix the issues outlined here.
> Note that BlockPlacementPolicyDefault shares the same NetworkTopology object
> used by DatanodeManager and requires Nodes stored in the topology to be
> {{DatanodeDescriptor}} for block placement. So we need to make sure we don't
> pollute the NetworkTopology if we plan to fix it on the server side.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
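As an aside on the "general distance calculation" point in the description above, the idea can be sketched by counting path segments below the deepest common ancestor of two network locations. The following is only an illustrative, hypothetical sketch (TopologyWeight and its distance method are made-up names, not Hadoop's actual NetworkTopology API):

```java
// Hypothetical sketch of a path-based distance between two network locations,
// in contrast to the 0/1/2 buckets of getWeight quoted above.
// Distance = segments(a below common ancestor) + segments(b below common ancestor).
public class TopologyWeight {
    static int distance(String pathA, String pathB) {
        String[] a = pathA.split("/");
        String[] b = pathB.split("/");
        int common = 0;
        // Count matching leading path segments (the deepest common ancestor).
        while (common < a.length && common < b.length && a[common].equals(b[common])) {
            common++;
        }
        return (a.length - common) + (b.length - common);
    }

    public static void main(String[] args) {
        System.out.println(distance("/dc1/rack1", "/dc1/rack1")); // same rack: 0
        System.out.println(distance("/dc1/rack1", "/dc1/rack2")); // same dc:   2
        System.out.println(distance("/dc1/rack1", "/dc2/rack3")); // off dc:    4
    }
}
```

Under such a scheme, sorting by this value would rank racks by how remote they are, instead of collapsing everything beyond the local rack into a single "off rack" bucket.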
[jira] [Commented] (HDFS-10206) getBlockLocations might not sort datanodes properly by distance
[ https://issues.apache.org/jira/browse/HDFS-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717543#comment-15717543 ] Ming Ma commented on HDFS-10206:
---
To clarify, "two nodes of the same network path" referred to two identical nodes, just like how getWeight could return 0 in that case.
[jira] [Commented] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy
[ https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717461#comment-15717461 ] Hadoop QA commented on HDFS-11193:
--
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 13s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 40s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} HDFS-10285 passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 134 unchanged - 2 fixed = 134 total (was 136) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 36s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 76m 11s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 95m 13s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
| | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
| | hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFSStriped |
| | hadoop.hdfs.TestRollingUpgrade |
| | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
| | hadoop.hdfs.server.namenode.TestFileTruncate |
| | hadoop.hdfs.TestFileChecksum |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:9560f25 |
| JIRA Issue | HDFS-11193 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841610/HDFS-11193-HDFS-10285-01.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 44b2b0692594 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | HDFS-10285 / 39f7a49 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17749/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17749/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17749/console |
| Powered by | Apache Yetus 0.4.0 |
[jira] [Commented] (HDFS-10206) getBlockLocations might not sort datanodes properly by distance
[ https://issues.apache.org/jira/browse/HDFS-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717398#comment-15717398 ] Nandakumar commented on HDFS-10206:
---
If we return 0 for two nodes having the same network path (i.e. in the same rack) from getWeightUsingNetworkLocation:
* getWeightUsingNetworkLocation will return 0 for same rack
* getWeight will return 2 for same rack

It would be good to have the same behavior across these methods.
[jira] [Commented] (HDFS-10206) getBlockLocations might not sort datanodes properly by distance
[ https://issues.apache.org/jira/browse/HDFS-10206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717370#comment-15717370 ] Ming Ma commented on HDFS-10206:
---
Thanks [~nandakumar131]! The patches look good overall. To make the method more general, it seems better to have getWeightUsingNetworkLocation return 0 when two nodes have the same network path. [~daryn] [~kihwal], any concerns about the added 0.1ms latency? Note this only happens in the non-datanode reader scenario, and it doesn't hold the FSNamesystem lock.
[jira] [Updated] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy
[ https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-11193:
Attachment: HDFS-11193-HDFS-10285-01.patch

Attached another patch fixing the test case failure and checkstyle issues.

> [SPS]: Erasure coded files should be considered for satisfying storage policy
> -
>
> Key: HDFS-11193
> URL: https://issues.apache.org/jira/browse/HDFS-11193
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: namenode
> Reporter: Rakesh R
> Assignee: Rakesh R
> Attachments: HDFS-11193-HDFS-10285-00.patch,
> HDFS-11193-HDFS-10285-01.patch
>
> Erasure coded striped files support the storage policies {{HOT, COLD, ALLSSD}}.
> An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory should consider
> all immediate files under that directory and check whether the files really
> match the namespace storage policy. All the mismatched striped blocks should
> be chosen for block movement.
[jira] [Commented] (HDFS-11178) TestAddStripedBlockInFBR#testAddBlockInFullBlockReport fails frequently in trunk
[ https://issues.apache.org/jira/browse/HDFS-11178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717313#comment-15717313 ] Brahma Reddy Battula commented on HDFS-11178:
---
[~liuml07] do you have any comments on the latest patch?

> TestAddStripedBlockInFBR#testAddBlockInFullBlockReport fails frequently in
> trunk
>
> Key: HDFS-11178
> URL: https://issues.apache.org/jira/browse/HDFS-11178
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: test
> Reporter: Yiqun Lin
> Assignee: Yiqun Lin
> Attachments: HDFS-11178.001.patch, HDFS-11178.002.patch,
> HDFS-11178.003.patch
>
> The test {{TestAddStripedBlockInFBR#testAddBlockInFullBlockReport}} fails
> easily in trunk. It's easy to reproduce the failure: it fails 2~3 times when
> I run the test 4~5 times locally. It also failed in a recent Jenkins run
> (https://builds.apache.org/job/PreCommit-HDFS-Build/17667/testReport/).
> The stack info:
> {code}
> java.lang.AssertionError: expected:<9> but was:<7>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:743)
>   at org.junit.Assert.assertEquals(Assert.java:118)
>   at org.junit.Assert.assertEquals(Assert.java:555)
>   at org.junit.Assert.assertEquals(Assert.java:542)
>   at org.apache.hadoop.hdfs.server.namenode.TestAddStripedBlockInFBR.testAddBlockInFullBlockReport(TestAddStripedBlockInFBR.java:108)
> {code}
> It's easy to fix: use {{GenericTestUtils.waitFor}} to wait for the full
> blocks to be reported.
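The fix suggested in the description, {{GenericTestUtils.waitFor}}, follows a simple poll-until-true pattern. Below is a self-contained, hypothetical sketch of that pattern (not Hadoop's actual helper, whose real signature takes a Guava Supplier plus check-interval and timeout arguments): it retries a condition until it holds or the timeout expires, which avoids asserting once against a racy full block report.

```java
import java.util.function.Supplier;

// Illustrative poll-until-true helper; names are made up for this sketch.
public class WaitFor {
    static void waitFor(Supplier<Boolean> check, long intervalMs, long timeoutMs) {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (check.get()) {
                return; // condition reached, e.g. all striped blocks reported
            }
            try {
                Thread.sleep(intervalMs); // back off before rechecking
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                break;
            }
        }
        throw new AssertionError("Timed out waiting for condition");
    }
}
```

In a test, the one-shot `assertEquals(9, reportedBlocks)` becomes `waitFor(() -> reportedBlocks() == 9, 100, 10000)`, so a slow full block report no longer fails the assertion.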
[jira] [Updated] (HDFS-11180) Intermittent deadlock in NameNode when failover happens.
[ https://issues.apache.org/jira/browse/HDFS-11180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Akira Ajisaka updated HDFS-11180:
Fix Version/s: 2.7.4

Committed the branch-2.7 patch. I'll run the full HDFS tests locally with the branch-2.6 patch and then commit it.

> Intermittent deadlock in NameNode when failover happens.
>
> Key: HDFS-11180
> URL: https://issues.apache.org/jira/browse/HDFS-11180
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: namenode
> Affects Versions: 2.6.0
> Reporter: Abhishek Modi
> Assignee: Akira Ajisaka
> Priority: Blocker
> Labels: high-availability
> Fix For: 2.8.0, 2.7.4, 3.0.0-alpha2
>
> Attachments: HDFS-11180-branch-2.01.patch,
> HDFS-11180-branch-2.6.01.patch, HDFS-11180-branch-2.7.01.patch,
> HDFS-11180-branch-2.8.01.patch, HDFS-11180.00.patch, HDFS-11180.01.patch,
> HDFS-11180.02.patch, HDFS-11180.03.patch, HDFS-11180.04.patch, jstack.log
>
> It is happening because metrics get updated at the same time the failover
> is happening. Please find the attached jstack from that point in time.
[jira] [Commented] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning
[ https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717175#comment-15717175 ] Weiwei Yang commented on HDFS-10581:
---
Hello [~djp] and folks on CC, thanks for looking at this one. I thought this was a minor change to improve user experience that wouldn't cause any problems. Please let me know your concerns and any problems it could cause. Thank you.

> Redundant table on Datanodes page when no nodes under decomissioning
>
> Key: HDFS-10581
> URL: https://issues.apache.org/jira/browse/HDFS-10581
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: hdfs, ui
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Labels: ui, web-ui
> Attachments: HDFS-10581.001.patch, HDFS-10581.002.patch, after.2.jpg,
> after.jpg, before.jpg
>
> A minor user experience improvement on the namenode UI. Propose to improve it
> from [^before.jpg] to [^after.jpg].
[jira] [Commented] (HDFS-11166) Add webhdfs GETFILEBLOCKLOCATIONS document
[ https://issues.apache.org/jira/browse/HDFS-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717154#comment-15717154 ] Weiwei Yang commented on HDFS-11166:
---
Sure, thanks @Mingliang Liu and [~ajisakaa].

> Add webhdfs GETFILEBLOCKLOCATIONS document
> --
>
> Key: HDFS-11166
> URL: https://issues.apache.org/jira/browse/HDFS-11166
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: documentation, webhdfs
> Affects Versions: 2.7.3
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Attachments: HDFS-11166.01.patch
>
> HDFS-11156 adds GETFILEBLOCKLOCATIONS to webhdfs; users can use this HTTP
> request to get an array of BlockLocation JSON output. This ticket tracks
> the doc updates in WebHDFS.md.
[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
[ https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717152#comment-15717152 ] Weiwei Yang commented on HDFS-11156:
---
Thanks [~liuml07] :)

> Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
>
> Key: HDFS-11156
> URL: https://issues.apache.org/jira/browse/HDFS-11156
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: webhdfs
> Affects Versions: 2.7.3
> Reporter: Weiwei Yang
> Assignee: Weiwei Yang
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HDFS-11156.01.patch, HDFS-11156.02.patch,
> HDFS-11156.03.patch, HDFS-11156.04.patch, HDFS-11156.05.patch,
> HDFS-11156.06.patch
>
> The following WebHDFS REST API
> {code}
> http://:/webhdfs/v1/?op=GET_BLOCK_LOCATIONS&offset=0&length=1
> {code}
> will get a response like
> {code}
> {
>   "LocatedBlocks" : {
>     "fileLength" : 1073741824,
>     "isLastBlockComplete" : true,
>     "isUnderConstruction" : false,
>     "lastLocatedBlock" : { ... },
>     "locatedBlocks" : [ {...} ]
>   }
> }
> {code}
> This represents an *o.a.h.h.p.LocatedBlocks*. However, according to the
> *FileSystem* API,
> {code}
> public BlockLocation[] getFileBlockLocations(Path p, long start, long len)
> {code}
> clients would expect an array of BlockLocation. This mismatch should be
> fixed. Marked as an incompatible change, as this will change the output of
> the GET_BLOCK_LOCATIONS API.
[jira] [Commented] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning
[ https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717053#comment-15717053 ] Junping Du commented on HDFS-10581:
---
Removing elements from the UI is not a trivial thing, especially since some UI tests may check for a consistent user experience from release to release. Compared with confusing people, redundancy is the lesser evil. What do other HDFS folks think? CC [~jingzhao], [~arpitagarwal] and [~andrew.wang].
[jira] [Commented] (HDFS-11197) Listing encryption zones fails when deleting a EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717048#comment-15717048 ] Xiao Chen commented on HDFS-11197:
---
Thanks for the new rev [~wchevreuil]. Please fix the remaining whitespace. Also a nit: {{assertEquals}} takes its parameters as {{expected}}, {{actual}}. This will be helpful when a test fails. +1 after the above fixes.

> Listing encryption zones fails when deleting a EZ that is on a snapshotted
> directory
>
> Key: HDFS-11197
> URL: https://issues.apache.org/jira/browse/HDFS-11197
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs
> Affects Versions: 2.6.0
> Reporter: Wellington Chevreuil
> Assignee: Wellington Chevreuil
> Priority: Minor
> Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch,
> HDFS-11197-3.patch, HDFS-11197-4.patch, HDFS-11197-5.patch
>
> If an EZ directory is under a snapshottable directory, and a snapshot has
> been taken, then if this EZ is permanently deleted, it causes the *hdfs
> crypto -listZones* command to fail without showing any of the still
> available zones. This happens only after the EZ is removed from the Trash
> folder.
> For example, considering */test-snap* is snapshottable and there is already
> a snapshot for it:
> {noformat}
> $ hdfs crypto -listZones
> /user/systest my-key
> /test-snap/EZ-1 my-key
> $ hdfs dfs -rmr /test-snap/EZ-1
> INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/test-snap/EZ-1' to trash at:
> hdfs://ns1/user/hdfs/.Trash/Current/test-snap/EZ-1
> $ hdfs crypto -listZones
> /user/systest my-key
> /user/hdfs/.Trash/Current/test-snap/EZ-1 my-key
> $ hdfs dfs -rmr /user/hdfs/.Trash/Current/test-snap/EZ-1
> Deleted /user/hdfs/.Trash/Current/test-snap/EZ-1
> $ hdfs crypto -listZones
> RemoteException: Absolute path required
> {noformat}
> Once this error happens, *hdfs crypto -listZones* only works again if we
> remove the snapshot:
> {noformat}
> $ hdfs dfs -deleteSnapshot /test-snap snap1
> $ hdfs crypto -listZones
> /user/systest my-key
> {noformat}
> If we instead delete the EZ using the *skipTrash* option, *hdfs crypto
> -listZones* does not break:
> {noformat}
> $ hdfs crypto -listZones
> /user/systest my-key
> /test-snap/EZ-2 my-key
> $ hdfs dfs -rmr -skipTrash /test-snap/EZ-2
> Deleted /test-snap/EZ-2
> $ hdfs crypto -listZones
> /user/systest my-key
> {noformat}
> The different behaviour seems to be because, when removing the EZ trash
> folder, its related INode is left with no parent INode. This causes
> *EncryptionZoneManager.listEncryptionZones* to throw the seen error when
> trying to resolve the inodes in the given path.
> I am proposing a patch that fixes this issue by simply performing an
> additional check in *EncryptionZoneManager.listEncryptionZones* for the case
> where an inode has no parent, so that it would be skipped in the list
> without trying to resolve it. Feedback on the proposal is appreciated.
[jira] [Updated] (HDFS-10581) Redundant table on Datanodes page when no nodes under decomissioning
[ https://issues.apache.org/jira/browse/HDFS-10581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Junping Du updated HDFS-10581:
Priority: Major (was: Trivial)
[jira] [Commented] (HDFS-8971) Remove guards when calling LOG.debug() and LOG.trace() in client package
[ https://issues.apache.org/jira/browse/HDFS-8971?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717023#comment-15717023 ] ASF GitHub Bot commented on HDFS-8971:
--
Github user liuml07 commented on the issue:
https://github.com/apache/hadoop/pull/46

Can you create a JIRA on https://issues.apache.org/jira/browse/HDFS and change the title of this PR to link the JIRA here? See https://wiki.apache.org/hadoop/GithubIntegration for more information.

For the code, the DataNode logging has switched to slf4j. Please use placeholders; a guard statement like `if (LOG.isTraceEnabled())` is unnecessary. Please refer to https://issues.apache.org/jira/browse/HDFS-8971 for examples.

> Remove guards when calling LOG.debug() and LOG.trace() in client package
>
> Key: HDFS-8971
> URL: https://issues.apache.org/jira/browse/HDFS-8971
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: build
> Reporter: Mingliang Liu
> Assignee: Mingliang Liu
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HDFS-8971.000.patch, HDFS-8971.001.patch
>
> We moved the {{shortcircuit}} package from {{hadoop-hdfs}} to the
> {{hadoop-hdfs-client}} module in JIRA
> [HDFS-8934|https://issues.apache.org/jira/browse/HDFS-8934] and
> [HDFS-8951|https://issues.apache.org/jira/browse/HDFS-8951], and
> {{BlockReader}} in
> [HDFS-8925|https://issues.apache.org/jira/browse/HDFS-8925]. Meanwhile, we
> also replaced the _log4j_ log with the _slf4j_ logger. There was existing
> code in the client package to guard the log when calling {{LOG.debug()}} and
> {{LOG.trace()}}, e.g. in {{ShortCircuitCache.java}} we have code like this:
> {code:title=Trace with guards|borderStyle=solid}
> 724    if (LOG.isTraceEnabled()) {
> 725      LOG.trace(this + ": found waitable for " + key);
> 726    }
> {code}
> In _slf4j_, this kind of guard is not necessary. We should clean the code by
> removing the guard from the client package.
> {code:title=Trace without guards|borderStyle=solid}
> 724    LOG.trace("{}: found waitable for {}", this, key);
> {code}
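To make the deferred-formatting argument concrete, here is a self-contained sketch (PlaceholderDemo is a made-up stand-in, not the real slf4j Logger API) of why parameterized logging needs no guard: when the level is disabled, the arguments are never formatted at all.

```java
// Minimal stand-in for slf4j-style parameterized logging. With the
// string-concatenation style, "this + ": found waitable for " + key" is
// built even when TRACE is off; with placeholders the formatting work is
// skipped entirely unless the level is enabled.
public class PlaceholderDemo {
    static boolean traceEnabled = false;
    static int formatCalls = 0; // counts how often a message was actually formatted

    static void trace(String pattern, Object... args) {
        if (!traceEnabled) {
            return; // disabled level: arguments are never formatted
        }
        formatCalls++;
        for (Object arg : args) {
            pattern = pattern.replaceFirst("\\{\\}", String.valueOf(arg));
        }
        System.out.println(pattern);
    }

    public static void main(String[] args) {
        trace("{}: found waitable for {}", "cache", "key1"); // TRACE off: no work done
        traceEnabled = true;
        trace("{}: found waitable for {}", "cache", "key1"); // formats and prints
    }
}
```

The real slf4j Logger applies the same idea inside `trace(String, Object...)`, which is why the explicit `isTraceEnabled()` guard becomes redundant.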
[jira] [Commented] (HDFS-10917) Collect peer performance statistics on DataNode.
[ https://issues.apache.org/jira/browse/HDFS-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717012#comment-15717012 ] Hadoop QA commented on HDFS-10917:
--
| (x) *{color:red}-1 overall{color}* |
\\ \\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 18s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 43s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 25s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 16 new + 134 unchanged - 1 fixed = 150 total (was 135) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 68m 54s{color} | {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 40s{color} | {color:black} {color} |
\\ \\
|| Reason || Tests ||
| Failed junit tests | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure160 |
\\ \\
|| Subsystem || Report/Notes ||
| Docker | Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HDFS-10917 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841580/HDFS-10917.001.patch |
| Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle |
| uname | Linux 0aa7fb924323 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 51211a7 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/17747/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html |
| checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17747/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt |
| unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17747/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt |
| Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17747/testReport/ |
| modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17747/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.
[jira] [Commented] (HDFS-11197) Listing encryption zones fails when deleting an EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15717004#comment-15717004 ] Hadoop QA commented on HDFS-11197: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 22s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 9m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 0s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 17s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 8s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 58s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 13s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 90m 47s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}115m 39s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11197 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841576/HDFS-11197-5.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux cc31b9b0c7ce 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 51211a7 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/17746/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/17746/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17746/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17746/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Listing encryption zones fails when deleting a EZ that is on a snapshotted > directory > > > Key: HDFS-11197 >
[jira] [Commented] (HDFS-11188) Change min supported DN and NN versions back to 2.x
[ https://issues.apache.org/jira/browse/HDFS-11188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716956#comment-15716956 ] Yongjun Zhang commented on HDFS-11188: -- Hi [~andrew.wang], Thanks for working on this. The 2.1.0-beta was set by https://issues.apache.org/jira/browse/HDFS-5083. Are we sure we can upgrade from 2.1.0-beta to 3.0? Hi [~kihwal], would you please comment, since you worked on HDFS-5083? Thanks. > Change min supported DN and NN versions back to 2.x > --- > > Key: HDFS-11188 > URL: https://issues.apache.org/jira/browse/HDFS-11188 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rolling upgrades >Affects Versions: 3.0.0-alpha1 >Reporter: Andrew Wang >Assignee: Andrew Wang >Priority: Critical > Attachments: HDFS-11188.001.patch > > > This is the inverse of HDFS-10398 and HADOOP-13142. Currently, trunk requires > a software DN and NN version of 3.0.0-alpha1. This means we cannot perform a > rolling upgrade from 2.x to 3.x. > The first step towards supporting rolling upgrade is changing these back to a > 2.x version. For reference, branch-2 has these versions set to "2.1.0-beta".
[jira] [Updated] (HDFS-11197) Listing encryption zones fails when deleting an EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HDFS-11197: Status: Patch Available (was: In Progress) > Listing encryption zones fails when deleting an EZ that is on a snapshotted > directory > > > Key: HDFS-11197 > URL: https://issues.apache.org/jira/browse/HDFS-11197 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, > HDFS-11197-3.patch, HDFS-11197-4.patch, HDFS-11197-5.patch > > > If an EZ directory is under a snapshottable directory, and a snapshot has been > taken, then permanently deleting this EZ causes the *hdfs crypto > -listZones* command to fail without showing any of the still-available zones. > This happens only after the EZ is removed from the Trash folder. For example, > considering the */test-snap* folder is snapshottable and there is already a > snapshot of it: > {noformat} > $ hdfs crypto -listZones > /user/systest my-key > /test-snap/EZ-1 my-key > $ hdfs dfs -rmr /test-snap/EZ-1 > INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/test-snap/EZ-1' to trash at: > hdfs://ns1/user/hdfs/.Trash/Current/test-snap/EZ-1 > $ hdfs crypto -listZones > /user/systest my-key > /user/hdfs/.Trash/Current/test-snap/EZ-1 my-key > $ hdfs dfs -rmr /user/hdfs/.Trash/Current/test-snap/EZ-1 > Deleted /user/hdfs/.Trash/Current/test-snap/EZ-1 > $ hdfs crypto -listZones > RemoteException: Absolute path required > {noformat} > Once this error happens, *hdfs crypto -listZones* only works again if we > remove the snapshot: > {noformat} > $ hdfs dfs -deleteSnapshot /test-snap snap1 > $ hdfs crypto -listZones > /user/systest my-key > {noformat} > If we instead delete the EZ using the *skipTrash* option, *hdfs crypto > -listZones* does not break: > {noformat} > $ hdfs crypto -listZones > /user/systest my-key > /test-snap/EZ-2 my-key > $ hdfs dfs -rmr -skipTrash /test-snap/EZ-2 > Deleted /test-snap/EZ-2 > $ hdfs crypto -listZones > /user/systest my-key > {noformat} > The different behaviour seems to be because when removing the EZ trash > folder, its related INode is left with no parent INode. This causes > *EncryptionZoneManager.listEncryptionZones* to throw the seen error when > trying to resolve the inodes in the given path. > I am proposing a patch that fixes this issue by simply performing an additional > check in *EncryptionZoneManager.listEncryptionZones* for the case where an inode > has no parent, so that it is skipped in the list without trying to > resolve it. Feedback on the proposal is appreciated.
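The proposed skip-orphaned-inode check can be sketched as follows. This is an illustrative stand-in, not the actual EncryptionZoneManager code, and the Inode type below is a hypothetical simplification:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the proposed fix (not the real EncryptionZoneManager
// API): a zone whose inode has lost its parent, e.g. after its trash copy was
// purged under a snapshotted directory, is skipped instead of failing the
// whole listing with "Absolute path required".
class EncryptionZoneListingSketch {
    static class Inode {
        final String path;
        final Inode parent;  // becomes null once the trash copy is purged

        Inode(String path, Inode parent) {
            this.path = path;
            this.parent = parent;
        }
    }

    static List<String> listZones(List<Inode> zoneInodes) {
        List<String> zones = new ArrayList<>();
        for (Inode zone : zoneInodes) {
            if (zone.parent == null) {
                continue;  // orphaned inode: skip rather than throw
            }
            zones.add(zone.path);
        }
        return zones;
    }
}
```

The design choice is deliberately conservative: an orphaned zone inode no longer corresponds to a reachable path, so omitting it from the listing loses nothing while keeping the command usable for the remaining zones.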
[jira] [Updated] (HDFS-11197) Listing encryption zones fails when deleting an EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HDFS-11197: Status: In Progress (was: Patch Available) > Listing encryption zones fails when deleting an EZ that is on a snapshotted > directory > > > Key: HDFS-11197 > URL: https://issues.apache.org/jira/browse/HDFS-11197 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, > HDFS-11197-3.patch, HDFS-11197-4.patch, HDFS-11197-5.patch
[jira] [Commented] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter
[ https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716886#comment-15716886 ] Mingliang Liu commented on HDFS-11094: -- For the protocol changes, end-to-end tests are very helpful. Starting a mini dfs cluster is not very expensive; I can usually start and shut down an empty mini cluster in 3~5 seconds on my dev machine. The first heartbeat will bypass the large interval, so 1) Choosing {{HAServiceStateProto}} instead of {{HAServiceStateProto}} makes sense as {{lastActiveClaimTxId}} will be updated in a timely manner, and we can save the complexity of updating it in this patch; 2) Unfortunately, the current methods (e.g. setting a large {{DFS_HEARTBEAT_INTERVAL_KEY}} config, or {{DataNode#setHeartbeatsDisabledForTests()}}) do not work without changes for testing this patch. I can accept that the existing tests in the patch are somewhat adequate. So this will not block the progress of this patch. Thanks, > Send back HAState along with NamespaceInfo during a versionRequest as an > optional parameter > --- > > Key: HDFS-11094 > URL: https://issues.apache.org/jira/browse/HDFS-11094 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, > HDFS-11094.003.patch, HDFS-11094.004.patch, HDFS-11094.005.patch, > HDFS-11094.006.patch, HDFS-11094.007.patch, HDFS-11094.008.patch, > HDFS-11094.009.patch > > > The datanode should know which NN is active when it is connecting/registering > to the NN. Currently, it only figures this out during its first (and > subsequent) heartbeat(s), so there is a period of time where the datanode > is alive and registered, but can't actually do anything because it doesn't > know which NN is active. A byproduct of this is that the MiniDFSCluster will > become active before it knows which NN is active, which can lead to NPEs when > calling getActiveNN().
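The optional-parameter semantics under discussion can be sketched like this; the class, enum, and method names below are illustrative, not the actual NamespaceInfo or protobuf-generated API:

```java
// Hedged sketch of the optional HA-state field. If the NameNode's
// versionRequest response carries the state, the DataNode learns the active
// NN at registration time; if the field is absent (e.g. an older NN that
// predates it), the DN falls back to waiting for the first heartbeat.
class NamespaceInfoSketch {
    enum HAServiceState { ACTIVE, STANDBY, UNKNOWN }

    private final HAServiceState haState;

    /** Passing null models the optional field being absent on the wire. */
    NamespaceInfoSketch(HAServiceState haState) {
        this.haState = (haState == null) ? HAServiceState.UNKNOWN : haState;
    }

    /** True only when the response actually carried an HA state. */
    boolean isActiveKnown() {
        return haState != HAServiceState.UNKNOWN;
    }

    HAServiceState getHaState() {
        return haState;
    }
}
```

Making the field optional is what keeps the protocol change rolling-upgrade safe: old NNs simply never set it, and new DNs behave exactly as before in that case.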
[jira] [Updated] (HDFS-10913) Introduce fault injectors to simulate slow mirrors
[ https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10913: - Status: Open (was: Patch Available) > Introduce fault injectors to simulate slow mirrors > -- > > Key: HDFS-10913 > URL: https://issues.apache.org/jira/browse/HDFS-10913 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, > HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, > HDFS-10913.005.patch, HDFS-10913.006.patch > > > BlockReceiver#datanodeSlowLogThresholdMs is used as a threshold to detect slow > mirrors, but BlockReceiver only writes some warning logs. In order to better test > the behavior of slow mirrors, fault injectors need to be introduced.
[jira] [Updated] (HDFS-10913) Introduce fault injectors to simulate slow mirrors
[ https://issues.apache.org/jira/browse/HDFS-10913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10913: - Status: Patch Available (was: Open) > Introduce fault injectors to simulate slow mirrors > -- > > Key: HDFS-10913 > URL: https://issues.apache.org/jira/browse/HDFS-10913 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10913.000.patch, HDFS-10913.001.patch, > HDFS-10913.002.patch, HDFS-10913.003.patch, HDFS-10913.004.patch, > HDFS-10913.005.patch, HDFS-10913.006.patch
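The fault-injector pattern referred to in HDFS-10913 is typically a no-op singleton that tests replace with a misbehaving subclass. A minimal sketch (class and method names are illustrative, not the actual Hadoop DataNode injector API):

```java
// Hedged sketch of the fault-injector pattern: production code calls a no-op
// singleton at interesting points in the write pipeline, and tests swap in a
// subclass that misbehaves (e.g. sleeps to simulate a slow mirror).
class FaultInjectorSketch {
    private static FaultInjectorSketch instance = new FaultInjectorSketch();

    static FaultInjectorSketch get() {
        return instance;
    }

    static void set(FaultInjectorSketch injector) {
        instance = injector;
    }

    /** No-op in production; tests override this to inject delays or failures. */
    public void beforeMirrorWrite() {
    }
}
```

A test would install an anonymous subclass whose `beforeMirrorWrite()` sleeps past the slow-mirror threshold, run the pipeline, assert on the resulting behavior, and restore the default no-op injector afterwards.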
[jira] [Commented] (HDFS-10917) Collect peer performance statistics on DataNode.
[ https://issues.apache.org/jira/browse/HDFS-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716819#comment-15716819 ] Xiaobing Zhou commented on HDFS-10917: -- v001 added two more tests. * testSyncWriterOsCacheTimeNanosMetrics * testWriteDataToDiskTimeNanosMetrics > Collect peer performance statistics on DataNode. > > > Key: HDFS-10917 > URL: https://issues.apache.org/jira/browse/HDFS-10917 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10917.000.patch, HDFS-10917.001.patch > > > DataNodes already detect if replication pipeline operations are slow and log > warnings. For the purpose of analysis, performance metrics are desirable. > This proposes adding them to DataNodes.
[jira] [Updated] (HDFS-10917) Collect peer performance statistics on DataNode.
[ https://issues.apache.org/jira/browse/HDFS-10917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaobing Zhou updated HDFS-10917: - Attachment: HDFS-10917.001.patch > Collect peer performance statistics on DataNode. > > > Key: HDFS-10917 > URL: https://issues.apache.org/jira/browse/HDFS-10917 > Project: Hadoop HDFS > Issue Type: Improvement > Components: datanode >Affects Versions: 2.8.0 >Reporter: Xiaobing Zhou >Assignee: Xiaobing Zhou > Attachments: HDFS-10917.000.patch, HDFS-10917.001.patch
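The kind of peer-performance statistic HDFS-10917 proposes can be sketched as below; this is an illustrative stand-in, not the actual Hadoop metrics2 API:

```java
// Hedged sketch: time each pipeline operation in nanoseconds and fold it
// into a statistic that can be reported, instead of only logging a warning
// when a configured slowness threshold is exceeded.
class PeerTimingSketch {
    private long count;
    private long totalNanos;

    /** Fold one observed operation duration into the running statistic. */
    void record(long elapsedNanos) {
        count++;
        totalNanos += elapsedNanos;
    }

    long averageNanos() {
        return count == 0 ? 0 : totalNanos / count;
    }

    /** Time an action (e.g. a mirror write or disk sync) and record it. */
    void timed(Runnable action) {
        long start = System.nanoTime();
        action.run();
        record(System.nanoTime() - start);
    }
}
```

Exposing an aggregate like this is what makes after-the-fact analysis possible: a warning log only tells you that one operation was slow, while a metric lets you compare peers and spot trends.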
[jira] [Updated] (HDFS-11197) Listing encryption zones fails when deleting an EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HDFS-11197: Attachment: HDFS-11197-5.patch Thanks for confirming this, [~xiaochen]! I'm submitting another patch with the whitespace fix, plus the code-convention changes to the test class. Please let me know if you have any additional comments. > Listing encryption zones fails when deleting an EZ that is on a snapshotted > directory > > > Key: HDFS-11197 > URL: https://issues.apache.org/jira/browse/HDFS-11197 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, > HDFS-11197-3.patch, HDFS-11197-4.patch, HDFS-11197-5.patch
[jira] [Commented] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter
[ https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716729#comment-15716729 ] Eric Badger commented on HDFS-11094: bq. I think the existing tests are quite adequate. I understand that a full-blown mini cluster is sometimes needed to test the distributed file system. However, we should avoid adding such end-to-end tests if it is possible to have reasonable unit tests. Upon looking at this again, I agree with [~kihwal]. I don't think that it is necessary for us to use a minicluster in this case. The current tests are adequate IMO since they test the methods that are directly used on either side of the version request. Additionally, the minicluster is expensive and creating a unit test with the minicluster would be difficult in this case since it requires a heartbeat to get out of its build() method (though difficulty is not my main objection). > Send back HAState along with NamespaceInfo during a versionRequest as an > optional parameter > --- > > Key: HDFS-11094 > URL: https://issues.apache.org/jira/browse/HDFS-11094 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, > HDFS-11094.003.patch, HDFS-11094.004.patch, HDFS-11094.005.patch, > HDFS-11094.006.patch, HDFS-11094.007.patch, HDFS-11094.008.patch, > HDFS-11094.009.patch
[jira] [Commented] (HDFS-11094) Send back HAState along with NamespaceInfo during a versionRequest as an optional parameter
[ https://issues.apache.org/jira/browse/HDFS-11094?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716687#comment-15716687 ] Kihwal Lee commented on HDFS-11094: --- bq. For the unit test, can we set a very large heartbeat interval in configuration, and check the active NN is not null after cluster.waitForActive()? I think the existing tests are quite adequate. I understand that a full-blown mini cluster is sometimes needed to test the distributed file system. However, we should avoid adding such end-to-end tests if it is possible to have reasonable unit tests. > Send back HAState along with NamespaceInfo during a versionRequest as an > optional parameter > --- > > Key: HDFS-11094 > URL: https://issues.apache.org/jira/browse/HDFS-11094 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode >Reporter: Eric Badger >Assignee: Eric Badger > Attachments: HDFS-11094.001.patch, HDFS-11094.002.patch, > HDFS-11094.003.patch, HDFS-11094.004.patch, HDFS-11094.005.patch, > HDFS-11094.006.patch, HDFS-11094.007.patch, HDFS-11094.008.patch, > HDFS-11094.009.patch
[jira] [Commented] (HDFS-11166) Add webhdfs GETFILEBLOCKLOCATIONS document
[ https://issues.apache.org/jira/browse/HDFS-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716398#comment-15716398 ] Mingliang Liu commented on HDFS-11166: -- I can also have a look at it next week. If [~ajisakaa] can review, that will be great. Thanks, > Add webhdfs GETFILEBLOCKLOCATIONS document > -- > > Key: HDFS-11166 > URL: https://issues.apache.org/jira/browse/HDFS-11166 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Affects Versions: 2.7.3 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11166.01.patch > > > HDFS-11156 adds GETFILEBLOCKLOCATIONS to webhdfs; users can use this HTTP > request to get an array of BlockLocation JSON output. This ticket is to track > the doc updates in WebHDFS.md
[jira] [Commented] (HDFS-11160) VolumeScanner reports write-in-progress replicas as corrupt incorrectly
[ https://issues.apache.org/jira/browse/HDFS-11160?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716177#comment-15716177 ] Wei-Chiu Chuang commented on HDFS-11160: The checkstyle warning is unrelated. The findbugs warning is likely a false positive. > VolumeScanner reports write-in-progress replicas as corrupt incorrectly > --- > > Key: HDFS-11160 > URL: https://issues.apache.org/jira/browse/HDFS-11160 > Project: Hadoop HDFS > Issue Type: Bug > Components: datanode > Environment: CDH5.7.4 >Reporter: Wei-Chiu Chuang >Assignee: Wei-Chiu Chuang > Attachments: HDFS-11160.001.patch, HDFS-11160.002.patch, > HDFS-11160.003.patch, HDFS-11160.reproduce.patch > > > Due to a race condition initially reported in HDFS-6804, the VolumeScanner may > erroneously detect good replicas as corrupt. This is serious because in some > cases it results in data loss if all replicas are declared corrupt. This bug > is especially prominent when there are a lot of append requests via > HttpFs/WebHDFS. > We are investigating an incident that caused a very high block corruption rate > in a relatively small cluster. Initially, we thought HDFS-11056 was to blame. > However, after applying HDFS-11056, we are still seeing the VolumeScanner > report corrupt replicas. > It turns out that if a replica is being appended to while the VolumeScanner is > scanning it, the VolumeScanner may use the new checksum to compare against old > data, causing a checksum mismatch. > I have a unit test to reproduce the error and will attach it later. A quick and > simple fix is to hold the FsDatasetImpl lock and read the checksum from disk.
[jira] [Commented] (HDFS-10675) Datanode support to read from external stores.
[ https://issues.apache.org/jira/browse/HDFS-10675?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716167#comment-15716167 ] Ewan Higgs commented on HDFS-10675: --- {quote} The proposal to handle this is to change the return type of the above functions to Optional. This will lead to changes in a lot of other places but will make it explicit that the return value can be null. I propose to open a new JIRA for this. {quote} Hi [~virajith]. I think this makes sense as it keeps the patches specific to the thing they are trying to solve. Would you like to create the ticket under HDFS-9806? > Datanode support to read from external stores. > --- > > Key: HDFS-10675 > URL: https://issues.apache.org/jira/browse/HDFS-10675 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Virajith Jalaparti > Attachments: HDFS-10675-HDFS-9806.001.patch, > HDFS-10675-HDFS-9806.002.patch, HDFS-10675-HDFS-9806.003.patch > > > This JIRA introduces a new {{PROVIDED}} {{StorageType}} to represent external > stores, along with enabling the Datanode to read from such stores using a > {{ProvidedReplica}} and a {{ProvidedVolume}}.
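The quoted proposal, returning {{Optional}} instead of a possibly-null reference, can be sketched as follows (the types and the lookup are placeholders, not the actual HDFS signatures):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Sketch of moving from a nullable return to Optional. The replica store
// and its value type are simplified stand-ins, not real FsDataset code.
public class OptionalReturnSketch {
    static final Map<Long, String> replicas = new HashMap<>();

    // Before: callers must remember that the result can be null.
    static String getReplicaNullable(long blockId) {
        return replicas.get(blockId);
    }

    // After: the type itself forces callers to handle absence.
    static Optional<String> getReplica(long blockId) {
        return Optional.ofNullable(replicas.get(blockId));
    }

    public static void main(String[] args) {
        replicas.put(1L, "replica-1");
        System.out.println(getReplica(1L).orElse("missing")); // replica-1
        System.out.println(getReplica(2L).orElse("missing")); // missing
    }
}
```

The churn mentioned in the quote comes from every caller of the old nullable method having to switch from null checks to {{Optional}} handling, which is why a separate JIRA makes sense.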
[jira] [Updated] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
[ https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Mingliang Liu updated HDFS-11156: - Resolution: Fixed Hadoop Flags: Reviewed Fix Version/s: 3.0.0-alpha2 2.8.0 Status: Resolved (was: Patch Available) +1 Committed to {{trunk}} through {{branch-2.8}} branches; fixed trivial cherry-pick conflicts. > Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API > > > Key: HDFS-11156 > URL: https://issues.apache.org/jira/browse/HDFS-11156 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 2.7.3 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11156.01.patch, HDFS-11156.02.patch, > HDFS-11156.03.patch, HDFS-11156.04.patch, HDFS-11156.05.patch, > HDFS-11156.06.patch > > > The following webhdfs REST API > {code} > http://:/webhdfs/v1/?op=GET_BLOCK_LOCATIONS&offset=0&length=1 > {code} > will get a response like > {code} > { > "LocatedBlocks" : { > "fileLength" : 1073741824, > "isLastBlockComplete" : true, > "isUnderConstruction" : false, > "lastLocatedBlock" : { ... }, > "locatedBlocks" : [ {...} ] > } > } > {code} > This represents *o.a.h.h.p.LocatedBlocks*. However, according to the > *FileSystem* API, > {code} > public BlockLocation[] getFileBlockLocations(Path p, long start, long len) > {code} > clients would expect an array of BlockLocation. This mismatch should be > fixed. Marked as an Incompatible change as this will change the output of the > GET_BLOCK_LOCATIONS API.
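The mismatch amounts to a missing flattening step: the server returns a nested LocatedBlocks tree, while FileSystem#getFileBlockLocations promises a flat array. A minimal sketch with simplified stand-in types (not the real HDFS classes, whose fields differ) looks like this:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of flattening a LocatedBlocks-style structure into the
// BlockLocation[] shape that FileSystem#getFileBlockLocations returns.
// Both types here are simplified stand-ins for the real HDFS classes.
public class BlockLocationSketch {
    static class LocatedBlock {
        final long offset, length;
        final String[] hosts;
        LocatedBlock(long offset, long length, String... hosts) {
            this.offset = offset; this.length = length; this.hosts = hosts;
        }
    }

    static class BlockLocation {
        final long offset, length;
        final String[] hosts;
        BlockLocation(LocatedBlock b) {
            this.offset = b.offset; this.length = b.length; this.hosts = b.hosts;
        }
    }

    // Discard the file-level wrapper fields and keep only the per-block
    // entries, one BlockLocation per located block.
    static List<BlockLocation> toBlockLocations(List<LocatedBlock> blocks) {
        List<BlockLocation> out = new ArrayList<>();
        for (LocatedBlock b : blocks) {
            out.add(new BlockLocation(b));
        }
        return out;
    }

    public static void main(String[] args) {
        List<LocatedBlock> lbs = Arrays.asList(
            new LocatedBlock(0L, 134217728L, "dn1", "dn2"),
            new LocatedBlock(134217728L, 134217728L, "dn2", "dn3"));
        System.out.println(toBlockLocations(lbs).size()); // 2
    }
}
```

The incompatible part of the change is exactly this: existing clients parsing the nested {{LocatedBlocks}} JSON would have to switch to parsing a flat array.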
[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
[ https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15716066#comment-15716066 ] Hudson commented on HDFS-11156: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #10930 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/10930/]) HDFS-11156. Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API. (liuml07: rev c7ff34f8dcca3a2024230c5383abd9299daa1b20) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/web/JsonUtil.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/WebHdfsFileSystem.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/web/resources/NamenodeWebHdfsMethods.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/resources/GetOpParam.java * (edit) hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/web/JsonUtilClient.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/web/TestWebHDFS.java > Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API > > > Key: HDFS-11156 > URL: https://issues.apache.org/jira/browse/HDFS-11156 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 2.7.3 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11156.01.patch, HDFS-11156.02.patch, > HDFS-11156.03.patch, HDFS-11156.04.patch, HDFS-11156.05.patch, > HDFS-11156.06.patch
[jira] [Commented] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy
[ https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715959#comment-15715959 ] Hadoop QA commented on HDFS-11193: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 7s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 33s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 4s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 44s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} HDFS-10285 passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 46s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} 
| {color:green} 0m 42s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 10 new + 134 unchanged - 2 fixed = 144 total (was 136) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 48s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 59m 15s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 83m 37s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.TestFileChecksum | | | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots | | | hadoop.hdfs.server.namenode.TestStoragePolicySatisfierWithStripedFile | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:9560f25 | | JIRA Issue | HDFS-11193 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841532/HDFS-11193-HDFS-10285-00.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 433f2e3fbf61 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | HDFS-10285 / 39f7a49 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17745/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/17745/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17745/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17745/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was autom
[jira] [Commented] (HDFS-11197) Listing encryption zones fails when deleting an EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715913#comment-15715913 ] Xiao Chen commented on HDFS-11197: -- Seeing the edit comment, I think it's fine since the new test verifies that what was failing before now passes. Thanks. > Listing encryption zones fails when deleting an EZ that is on a snapshotted > directory > > > Key: HDFS-11197 > URL: https://issues.apache.org/jira/browse/HDFS-11197 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, > HDFS-11197-3.patch, HDFS-11197-4.patch > > > If an EZ directory is under a snapshottable directory, and a snapshot has been > taken, then permanently deleting this EZ causes the *hdfs crypto > listZones* command to fail without showing any of the still available zones. > This happens only after the EZ is removed from the Trash folder. For example, > considering the */test-snap* folder is snapshottable and there is already a > snapshot for it: > {noformat} > $ hdfs crypto -listZones > /user/systest my-key > /test-snap/EZ-1 my-key > $ hdfs dfs -rmr /test-snap/EZ-1 > INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/test-snap/EZ-1' to trash at: > hdfs://ns1/user/hdfs/.Trash/Current/test-snap/EZ-1 > $ hdfs crypto -listZones > /user/systest my-key > /user/hdfs/.Trash/Current/test-snap/EZ-1 my-key > $ hdfs dfs -rmr /user/hdfs/.Trash/Current/test-snap/EZ-1 > Deleted /user/hdfs/.Trash/Current/test-snap/EZ-1 > $ hdfs crypto -listZones > RemoteException: Absolute path required > {noformat} > Once this error happens, *hdfs crypto -listZones* only works again if we > remove the snapshot: > {noformat} > $ hdfs dfs -deleteSnapshot /test-snap snap1 > $ hdfs crypto -listZones > /user/systest my-key > {noformat} > If we instead delete the EZ using the *skipTrash* option, *hdfs crypto > -listZones* does not break: > {noformat} > $ hdfs crypto -listZones > /user/systest my-key > /test-snap/EZ-2 my-key > $ hdfs dfs -rmr -skipTrash /test-snap/EZ-2 > Deleted /test-snap/EZ-2 > $ hdfs crypto -listZones > /user/systest my-key > {noformat} > The different behaviour seems to be because when removing the EZ trash > folder, its related INode is left with no parent INode. This causes > *EncryptionZoneManager.listEncryptionZones* to throw the seen error when > trying to resolve the inodes in the given path. > I am proposing a patch that fixes this issue by simply performing an additional > check in *EncryptionZoneManager.listEncryptionZones* for the case where an inode > has no parent, so that it would be skipped in the list without trying to > resolve it. Feedback on the proposal is appreciated.
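The proposed guard in {{EncryptionZoneManager.listEncryptionZones}} can be sketched with a toy inode model ({{INode}} here is a minimal stand-in, not the real HDFS class): a zone whose inode has lost its parent cannot be resolved to an absolute path, so it is skipped instead of aborting the whole listing.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal model of the proposed listEncryptionZones guard: an inode whose
// parent was deleted (e.g. via trash expiry under a snapshot) cannot be
// resolved to an absolute path, so it is skipped rather than throwing.
// INode is a simplified stand-in, not the real HDFS class.
public class ListZonesSketch {
    static class INode {
        final String name;
        final INode parent;
        INode(String name, INode parent) { this.name = name; this.parent = parent; }
    }

    static List<String> listZones(List<INode> zoneInodes) {
        List<String> zones = new ArrayList<>();
        for (INode inode : zoneInodes) {
            if (inode.parent == null) {
                continue; // deleted zone: no parent, path cannot be resolved
            }
            zones.add("/" + inode.parent.name + "/" + inode.name);
        }
        return zones;
    }

    public static void main(String[] args) {
        INode dir = new INode("test-snap", null); // acting only as a parent dir
        INode live = new INode("EZ-1", dir);
        INode orphan = new INode("EZ-2", null); // its parent was removed
        System.out.println(listZones(Arrays.asList(live, orphan))); // [/test-snap/EZ-1]
    }
}
```

Before the guard, the orphan entry would make the entire listing fail with "Absolute path required"; with it, the remaining zones are still reported.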
[jira] [Commented] (HDFS-11197) Listing encryption zones fails when deleting an EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715910#comment-15715910 ] Xiao Chen commented on HDFS-11197: -- Thanks for the prompt response Wellington. I was thinking about the whole scenario in the description of this jira: create zone -> create snapshot -> remove dir -> list zone. Do you think it makes sense to add that? > Listing encryption zones fails when deleting an EZ that is on a snapshotted > directory > > > Key: HDFS-11197 > URL: https://issues.apache.org/jira/browse/HDFS-11197 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, > HDFS-11197-3.patch, HDFS-11197-4.patch
[jira] [Comment Edited] (HDFS-11197) Listing encryption zones fails when deleting an EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715882#comment-15715882 ] Wellington Chevreuil edited comment on HDFS-11197 at 12/2/16 6:30 PM: -- Hi [~xiaochen]. Thanks for the review! I'll apply the suggestions and generate another patch. For the scenario from {{TestEncryptionZones#testSnapshotsOnEncryptionZones}}, I had already added a new test case {{TestEncryptionZoneManager#testListEncryptionZonesForRoot}} to cover a similar situation at the {{TestEncryptionZoneManager}} level. Do you think it's worth adding these other tests to {{TestEncryptionZones}} also? was (Author: wchevreuil): Hi [~xiaochen]. Thanks for the review! I'll apply the suggestions and generate another patch. For the scenario from {{TestEncryptionZones#testSnapshotsOnEncryptionZones}}, I had already added a new test case {{TestEncryptionZoneManager#testListEncryptionZonesForRoot}} to cover a similar situation at the {{TestEncryptionZoneManager}} level. Originally, I had not thought of that possibility. > Listing encryption zones fails when deleting an EZ that is on a snapshotted > directory > > > Key: HDFS-11197 > URL: https://issues.apache.org/jira/browse/HDFS-11197 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, > HDFS-11197-3.patch, HDFS-11197-4.patch
[jira] [Commented] (HDFS-11197) Listing encryption zones fails when deleting an EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715882#comment-15715882 ] Wellington Chevreuil commented on HDFS-11197: - Hi [~xiaochen]. Thanks for the review! I'll apply the suggestions and generate another patch. For the scenario from {{TestEncryptionZones#testSnapshotsOnEncryptionZones}}, I had already added a new test case {{TestEncryptionZoneManager#testListEncryptionZonesForRoot}} to cover a similar situation at the {{TestEncryptionZoneManager}} level. Originally, I had not thought of that possibility. > Listing encryption zones fails when deleting an EZ that is on a snapshotted > directory > > > Key: HDFS-11197 > URL: https://issues.apache.org/jira/browse/HDFS-11197 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, > HDFS-11197-3.patch, HDFS-11197-4.patch
[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
[ https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715850#comment-15715850 ] Hadoop QA commented on HDFS-11156: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 30s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 35s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 44s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 0s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 33s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 32s{color} | {color:orange} hadoop-hdfs-project: The patch generated 2 new + 237 unchanged - 0 fixed = 239 total (was 237) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 4s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 3s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red} 70m 20s{color} | {color:red} hadoop-hdfs in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}102m 20s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.datanode.TestFsDatasetCache | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11156 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841521/HDFS-11156.06.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux c4f8def85373 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c87b3a4 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/17744/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17744/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project.txt | | unit | https://builds.apache.org/job/PreCommi
[jira] [Commented] (HDFS-11197) Listing encryption zones fails when deleting an EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715818#comment-15715818 ] Xiao Chen commented on HDFS-11197: -- Thanks a lot [~wchevreuil] for reporting the issue and posting the fix! The idea looks good; some minor comments: - When possible, [assertEquals|http://junit.sourceforge.net/javadoc/org/junit/Assert.html#assertEquals(double, double)] is preferred over assertTrue, since it clearly says what's expected and what's actual when failing - No need to catch an exception and {{Assert.fail}}; just let it throw and JUnit will fail the test with the exception - The mocks are good for validating the change, but it also feels worth adding the scenario you mentioned in the description as a unit test. Maybe some additional steps at the end of {{TestEncryptionZones#testSnapshotsOnEncryptionZones}}, or a new test case. Your call. - Trivial comment: you could use static imports to save the explicit class names (e.g. {{Assert.}}, {{Mockito.}}) on each call. - Please fix the whitespace - The findbugs warning is unrelated to this change. Seems to be HDFS-10930 > Listing encryption zones fails when deleting an EZ that is on a snapshotted > directory > > > Key: HDFS-11197 > URL: https://issues.apache.org/jira/browse/HDFS-11197 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, > HDFS-11197-3.patch, HDFS-11197-4.patch
[jira] [Updated] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy
[ https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-11193: Attachment: HDFS-11193-HDFS-10285-00.patch Attaching an initial patch with the following changes: # Added checks to handle striped blocks. For a striped block, it needs to construct the internal block at the given index of a block group. # Refactored the {{chooseTargetTypeInSameNode}} function. This has been done to avoid choosing a target storage-type node which already has the same block. This is a generic case, common to both contiguous and striped blocks. For example, say there are 3 nodes A(disk, disk), B(disk, disk), C(disk, archive), and assume a block with storage locations A(disk), B(disk), C(disk). Now, with the policy set to {{COLD}} and {{#satisfyStoragePolicy}} invoked, while choosing the target node for A it shouldn't choose C. > [SPS]: Erasure coded files should be considered for satisfying storage policy > - > > Key: HDFS-11193 > URL: https://issues.apache.org/jira/browse/HDFS-11193 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-11193-HDFS-10285-00.patch > > > Erasure coded striped files support storage policies {{HOT, COLD, ALLSSD}}. > An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory should consider > all immediate files under that directory and check whether those files > really match the namespace storage policy. All the mismatched striped > blocks should be chosen for block movement. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
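The refactoring described in point 2 boils down to: never pick a target node that already stores a replica (or internal block) of the block being moved. A simplified, self-contained sketch of that selection rule — the data structures below are illustrative stand-ins, not the actual SPS classes:

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ChooseTargetSketch {
    // Return the first candidate node that does not already hold the given
    // block. E.g. for a block on A, B, and C under the COLD policy, the
    // target chosen while moving A's replica must not be C, since C already
    // has the block.
    static String chooseTarget(String blockId,
                               List<String> candidates,
                               Map<String, Set<String>> blocksOnNode) {
        for (String node : candidates) {
            Set<String> held = blocksOnNode.getOrDefault(node, Collections.emptySet());
            if (!held.contains(blockId)) {
                return node;
            }
        }
        return null; // no suitable target found
    }
}
```

The same check works unchanged for contiguous and striped blocks, which is why it was factored into a shared helper.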
[jira] [Updated] (HDFS-11193) [SPS]: Erasure coded files should be considered for satisfying storage policy
[ https://issues.apache.org/jira/browse/HDFS-11193?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rakesh R updated HDFS-11193: Status: Patch Available (was: Open) > [SPS]: Erasure coded files should be considered for satisfying storage policy > - > > Key: HDFS-11193 > URL: https://issues.apache.org/jira/browse/HDFS-11193 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: namenode >Reporter: Rakesh R >Assignee: Rakesh R > Attachments: HDFS-11193-HDFS-10285-00.patch > > > Erasure coded striped files support storage policies {{HOT, COLD, ALLSSD}}. > An {{HdfsAdmin#satisfyStoragePolicy}} API call on a directory should consider > all immediate files under that directory and check whether those files > really match the namespace storage policy. All the mismatched striped > blocks should be chosen for block movement. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11180) Intermittent deadlock in NameNode when failover happens.
[ https://issues.apache.org/jira/browse/HDFS-11180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715756#comment-15715756 ] Akira Ajisaka commented on HDFS-11180: -- I'll commit the branch-2.7 patch tomorrow if there are no objections. > Intermittent deadlock in NameNode when failover happens. > > > Key: HDFS-11180 > URL: https://issues.apache.org/jira/browse/HDFS-11180 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.6.0 >Reporter: Abhishek Modi >Assignee: Akira Ajisaka >Priority: Blocker > Labels: high-availability > Fix For: 2.8.0, 3.0.0-alpha2 > > Attachments: HDFS-11180-branch-2.01.patch, > HDFS-11180-branch-2.6.01.patch, HDFS-11180-branch-2.7.01.patch, > HDFS-11180-branch-2.8.01.patch, HDFS-11180.00.patch, HDFS-11180.01.patch, > HDFS-11180.02.patch, HDFS-11180.03.patch, HDFS-11180.04.patch, jstack.log > > > It is happening due to metrics getting updated at the same time when failover > is happening. Please find attached jstack at that point of time. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11146) Excess replicas will not be deleted until all storages's FBR received after failover
[ https://issues.apache.org/jira/browse/HDFS-11146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715608#comment-15715608 ] Kihwal Lee commented on HDFS-11146: --- Sorry, I am busy and won't be able to review it properly soon. I will probably get to it next week. I would pay close attention to compatibility, interactions with the block report lease, etc. > Excess replicas will not be deleted until all storages's FBR received after > failover > > > Key: HDFS-11146 > URL: https://issues.apache.org/jira/browse/HDFS-11146 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Brahma Reddy Battula >Assignee: Brahma Reddy Battula > Attachments: HDFS-11146.patch > > > Excess replicas will not be deleted until all storages' full block reports > (FBRs) are received after a failover. > I think the following solution can help. > *Solution:* > After a failover, since DNs are aware of the failover, they can send another > block report (FBR) irrespective of the interval. Maybe some shuffling can be > done, similar to the initial delay. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
[ https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715559#comment-15715559 ] Weiwei Yang commented on HDFS-11156: Uploaded the v6 patch to address a minor change to the comments for "GET_BLOCK_LOCATIONS", per the discussion [here|https://issues.apache.org/jira/browse/HDFS-11166?focusedCommentId=15712909&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15712909] > Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API > > > Key: HDFS-11156 > URL: https://issues.apache.org/jira/browse/HDFS-11156 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 2.7.3 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11156.01.patch, HDFS-11156.02.patch, > HDFS-11156.03.patch, HDFS-11156.04.patch, HDFS-11156.05.patch, > HDFS-11156.06.patch > > > The following WebHDFS REST API > {code} > http://:/webhdfs/v1/?op=GET_BLOCK_LOCATIONS&offset=0&length=1 > {code} > will get a response like > {code} > { > "LocatedBlocks" : { > "fileLength" : 1073741824, > "isLastBlockComplete" : true, > "isUnderConstruction" : false, > "lastLocatedBlock" : { ... }, > "locatedBlocks" : [ {...} ] > } > } > {code} > This represents *o.a.h.h.p.LocatedBlocks*. However, according to the > *FileSystem* API, > {code} > public BlockLocation[] getFileBlockLocations(Path p, long start, long len) > {code} > clients would expect an array of BlockLocation. This mismatch should be > fixed. Marked as an incompatible change, as this will change the output of the > GET_BLOCK_LOCATIONS API. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
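The shape mismatch described in the issue can be illustrated by the flattening step a client would otherwise need: turning the nested LocatedBlocks structure into the flat BlockLocation array that FileSystem#getFileBlockLocations promises. The two types below are simplified stand-ins for the real org.apache.hadoop classes, not their actual definitions:

```java
import java.util.ArrayList;
import java.util.List;

public class BlockLocationSketch {
    // Simplified stand-in for o.a.h.h.p.LocatedBlock.
    static class LocatedBlock {
        final long offset, length;
        final String[] hosts;
        LocatedBlock(long offset, long length, String[] hosts) {
            this.offset = offset; this.length = length; this.hosts = hosts;
        }
    }

    // Simplified stand-in for o.a.h.fs.BlockLocation.
    static class BlockLocation {
        final long offset, length;
        final String[] hosts;
        BlockLocation(String[] hosts, long offset, long length) {
            this.hosts = hosts; this.offset = offset; this.length = length;
        }
    }

    // Flatten the LocatedBlocks-style list into the BlockLocation[] shape
    // that FileSystem#getFileBlockLocations callers expect.
    static BlockLocation[] toBlockLocations(List<LocatedBlock> located) {
        List<BlockLocation> out = new ArrayList<>();
        for (LocatedBlock lb : located) {
            out.add(new BlockLocation(lb.hosts, lb.offset, lb.length));
        }
        return out.toArray(new BlockLocation[0]);
    }
}
```

The new GETFILEBLOCKLOCATIONS op effectively performs this conversion server-side, so WebHDFS clients receive the same shape as native FileSystem callers.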
[jira] [Commented] (HDFS-11166) Add webhdfs GETFILEBLOCKLOCATIONS document
[ https://issues.apache.org/jira/browse/HDFS-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715557#comment-15715557 ] Hadoop QA commented on HDFS-11166: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 53s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 52s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 50s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 9m 22s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11166 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841519/HDFS-11166.01.patch | | Optional Tests | asflicense mvnsite | | uname | Linux 5e32559c6908 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c87b3a4 | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17743/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add webhdfs GETFILEBLOCKLOCATIONS document > -- > > Key: HDFS-11166 > URL: https://issues.apache.org/jira/browse/HDFS-11166 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Affects Versions: 2.7.3 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11166.01.patch > > > HDFS-11156 adds GETFILEBLOCKLOCATIONS in webhdfs, user can uses this http > request to get a array of BlockLocation json output. This ticket is to track > the doc updates in WebHDFS.md -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11132) Allow AccessControlException in contract tests when getFileStatus on subdirectory of existing files
[ https://issues.apache.org/jira/browse/HDFS-11132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=1571#comment-1571 ] Vishwajeet Dusane commented on HDFS-11132: -- Thanks a lot [~liuml07] for the quick turnaround and for pushing this patch through. > Allow AccessControlException in contract tests when getFileStatus on > subdirectory of existing files > --- > > Key: HDFS-11132 > URL: https://issues.apache.org/jira/browse/HDFS-11132 > Project: Hadoop HDFS > Issue Type: Bug > Components: test >Reporter: Vishwajeet Dusane >Assignee: Vishwajeet Dusane > Fix For: 3.0.0-alpha2 > > Attachments: HDFS-11132.001.patch > > > Azure Data Lake file system supports traversal access on files/folders and > demands execute permission on the parent for {{getFileStatus}} access. Ref > HDFS-9552. > The {{testMkdirsFailsForSubdirectoryOfExistingFile}} contract test expectation > fails with {{AccessControlException}} when {{exists(...)}} checks for a > sub-directory present under a file. > Expected: {{exists(...)}} to handle {{AccessControlException}} and ignore it > during the check for a sub-directory present under a file. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
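The expectation in this issue — that an exists() probe treats an access-control failure on a sub-path of a file the same as "not found", rather than failing the contract test — can be sketched as below. The exception and filesystem types are minimal hypothetical stand-ins, not the real Hadoop classes:

```java
public class ExistsSketch {
    // Stand-ins for o.a.h.security.AccessControlException and
    // java.io.FileNotFoundException, simplified as unchecked exceptions.
    static class AccessControlEx extends RuntimeException {}
    static class FileNotFoundEx extends RuntimeException {}

    interface Fs {
        void getFileStatus(String path); // throws one of the above if absent/denied
    }

    // exists() answers "false" both when the path is missing and when the
    // store denies traversal on a sub-path of an existing file (as ADLS does).
    static boolean exists(Fs fs, String path) {
        try {
            fs.getFileStatus(path);
            return true;
        } catch (FileNotFoundEx | AccessControlEx e) {
            return false;
        }
    }
}
```

Under this behaviour, a store that demands execute permission on the parent no longer breaks testMkdirsFailsForSubdirectoryOfExistingFile.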
[jira] [Comment Edited] (HDFS-11166) Add webhdfs GETFILEBLOCKLOCATIONS document
[ https://issues.apache.org/jira/browse/HDFS-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15713728#comment-15713728 ] Weiwei Yang edited comment on HDFS-11166 at 12/2/16 4:26 PM: - Hello [~liuml07], [~andrew.wang] I uploaded a patch to add documentation for GETFILEBLOCKLOCATIONS. Since, as discussed, GET_BLOCK_LOCATIONS is a private API, I will not add any public doc for it. So I modified the comment for GET_BLOCK_LOCATIONS in the v6 patch of HDFS-11156 as follows {code} /** * GET_BLOCK_LOCATIONS is a private/stable API op. It returns a * {@link org.apache.hadoop.hdfs.protocol.LocatedBlocks} * json object. */ GET_BLOCK_LOCATIONS(false, HttpURLConnection.HTTP_OK), {code} This ticket will be used to track pure doc changes. Hope it makes sense. was (Author: cheersyang): Hello [~liuml07], [~andrew.wang] The motivation of this ticket was to add the missing docs for the get block location REST API in WebHDFS; I created it right after HDFS-11156. Since, after discussion, HDFS-11156 will not change the existing API, this one also needs to be updated. So I will upload a patch that updates the docs to do the following # Add the GETFILEBLOCKLOCATIONS doc in WebHDFS # Mark GET_BLOCK_LOCATIONS as private/stable in order to reduce confusion > Add webhdfs GETFILEBLOCKLOCATIONS document > -- > > Key: HDFS-11166 > URL: https://issues.apache.org/jira/browse/HDFS-11166 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Affects Versions: 2.7.3 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11166.01.patch > > > HDFS-11156 adds GETFILEBLOCKLOCATIONS to WebHDFS; users can use this HTTP > request to get an array of BlockLocation JSON output. This ticket is to track > the doc updates in WebHDFS.md -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11156) Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API
[ https://issues.apache.org/jira/browse/HDFS-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11156: --- Attachment: HDFS-11156.06.patch > Add new op GETFILEBLOCKLOCATIONS to WebHDFS REST API > > > Key: HDFS-11156 > URL: https://issues.apache.org/jira/browse/HDFS-11156 > Project: Hadoop HDFS > Issue Type: Improvement > Components: webhdfs >Affects Versions: 2.7.3 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11156.01.patch, HDFS-11156.02.patch, > HDFS-11156.03.patch, HDFS-11156.04.patch, HDFS-11156.05.patch, > HDFS-11156.06.patch > > > The following WebHDFS REST API > {code} > http://:/webhdfs/v1/?op=GET_BLOCK_LOCATIONS&offset=0&length=1 > {code} > will get a response like > {code} > { > "LocatedBlocks" : { > "fileLength" : 1073741824, > "isLastBlockComplete" : true, > "isUnderConstruction" : false, > "lastLocatedBlock" : { ... }, > "locatedBlocks" : [ {...} ] > } > } > {code} > This represents *o.a.h.h.p.LocatedBlocks*. However, according to the > *FileSystem* API, > {code} > public BlockLocation[] getFileBlockLocations(Path p, long start, long len) > {code} > clients would expect an array of BlockLocation. This mismatch should be > fixed. Marked as an incompatible change, as this will change the output of the > GET_BLOCK_LOCATIONS API. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11166) Add webhdfs GETFILEBLOCKLOCATIONS document
[ https://issues.apache.org/jira/browse/HDFS-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11166: --- Status: Patch Available (was: Open) > Add webhdfs GETFILEBLOCKLOCATIONS document > -- > > Key: HDFS-11166 > URL: https://issues.apache.org/jira/browse/HDFS-11166 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Affects Versions: 2.7.3 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11166.01.patch > > > HDFS-11156 adds GETFILEBLOCKLOCATIONS to WebHDFS; users can use this HTTP > request to get an array of BlockLocation JSON output. This ticket is to track > the doc updates in WebHDFS.md -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11166) Add webhdfs GETFILEBLOCKLOCATIONS document
[ https://issues.apache.org/jira/browse/HDFS-11166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Weiwei Yang updated HDFS-11166: --- Attachment: HDFS-11166.01.patch > Add webhdfs GETFILEBLOCKLOCATIONS document > -- > > Key: HDFS-11166 > URL: https://issues.apache.org/jira/browse/HDFS-11166 > Project: Hadoop HDFS > Issue Type: Bug > Components: documentation, webhdfs >Affects Versions: 2.7.3 >Reporter: Weiwei Yang >Assignee: Weiwei Yang > Attachments: HDFS-11166.01.patch > > > HDFS-11156 adds GETFILEBLOCKLOCATIONS to WebHDFS; users can use this HTTP > request to get an array of BlockLocation JSON output. This ticket is to track > the doc updates in WebHDFS.md -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8411) Add bytes count metrics to datanode for ECWorker
[ https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715298#comment-15715298 ] Rakesh R commented on HDFS-8411: Thanks [~Sammi] for the continuous effort. It's nearing completion; it would be good to fix the checkstyle warning too. {code} ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/metrics/DataNodeMetrics.java:21:import static org.apache.hadoop.metrics2.lib.Interns.info;:15: Unused import - org.apache.hadoop.metrics2.lib.Interns.info. {code} > Add bytes count metrics to datanode for ECWorker > > > Key: HDFS-8411 > URL: https://issues.apache.org/jira/browse/HDFS-8411 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Li Bo >Assignee: SammiChen > Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, > HDFS-8411-003.patch, HDFS-8411-004.patch, HDFS-8411-005.patch, > HDFS-8411-006.patch, HDFS-8411-007.patch, HDFS-8411-008.patch > > > This is a sub-task of HDFS-7674. It calculates the amount of data that is > read from local or remote datanodes to perform decoding work, and also the > amount of data that is written to local or remote datanodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-11198) NN UI should link DN web address using hostnames
[ https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Kihwal Lee updated HDFS-11198: -- Assignee: Weiwei Yang > NN UI should link DN web address using hostnames > > > Key: HDFS-11198 > URL: https://issues.apache.org/jira/browse/HDFS-11198 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Kihwal Lee >Assignee: Weiwei Yang >Priority: Critical > > The new NN UI shows links to DN web pages, but since the link is from the > info address returned from jmx, it is in the IP address:port form. This > breaks if users are using filters utilizing cookies. > Since this is a new feature in 2.8, I didn't mark it as a blocker. I.e. it > does not break any existing functions. It just doesn't work properly in > certain environments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-11198) NN UI should link DN web address using hostnames
[ https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715214#comment-15715214 ] Kihwal Lee commented on HDFS-11198: --- sure go ahead. > NN UI should link DN web address using hostnames > > > Key: HDFS-11198 > URL: https://issues.apache.org/jira/browse/HDFS-11198 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Kihwal Lee >Assignee: Weiwei Yang >Priority: Critical > > The new NN UI shows links to DN web pages, but since the link is from the > info address returned from jmx, it is in the IP address:port form. This > breaks if users are using filters utilizing cookies. > Since this is a new feature in 2.8, I didn't mark it as a blocker. I.e. it > does not break any existing functions. It just doesn't work properly in > certain environments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-8411) Add bytes count metrics to datanode for ECWorker
[ https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15715103#comment-15715103 ] Hadoop QA commented on HDFS-8411: - | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 16s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 27s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 54s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 14s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 49s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 44s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 24s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 3 new + 74 unchanged - 0 fixed = 77 total (was 74) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 38s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 88m 37s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black}109m 30s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-8411 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841460/HDFS-8411-008.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux b0e49fb9d589 3.13.0-96-generic #143-Ubuntu SMP Mon Aug 29 20:15:20 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c87b3a4 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/17742/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | checkstyle | https://builds.apache.org/job/PreCommit-HDFS-Build/17742/artifact/patchprocess/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17742/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17742/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Add bytes count metrics to datanode for ECWorker > > > Key: HDFS-8411 > URL: https://issues.apache.org/
[jira] [Commented] (HDFS-11197) Listing encryption zones fails when deleting a EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15714979#comment-15714979 ] Hadoop QA commented on HDFS-11197: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s{color} | {color:blue} Docker mode activated. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 12s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 45s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 12s{color} | {color:green} trunk passed {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 41s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs in trunk has 1 extant Findbugs warnings. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 39s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 54s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 53s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 10s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s{color} | {color:red} The patch has 1 line(s) that end in whitespace. Use git apply --whitespace=fix <>. Refer https://git-scm.com/docs/git-apply {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 55s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 61m 40s{color} | {color:green} hadoop-hdfs in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 81m 7s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Image:yetus/hadoop:a9ad5d6 | | JIRA Issue | HDFS-11197 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12841447/HDFS-11197-4.patch | | Optional Tests | asflicense compile javac javadoc mvninstall mvnsite unit findbugs checkstyle | | uname | Linux 837aca05ca18 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh | | git revision | trunk / c87b3a4 | | Default Java | 1.8.0_111 | | findbugs | v3.0.0 | | findbugs | https://builds.apache.org/job/PreCommit-HDFS-Build/17741/artifact/patchprocess/branch-findbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html | | whitespace | https://builds.apache.org/job/PreCommit-HDFS-Build/17741/artifact/patchprocess/whitespace-eol.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/17741/testReport/ | | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/17741/console | | Powered by | Apache Yetus 0.4.0-SNAPSHOT http://yetus.apache.org | This message was automatically generated. > Listing encryption zones fails when deleting a EZ that is on a snapshotted > directory > > > Key: HDFS-11197 >
[jira] [Commented] (HDFS-11198) NN UI should link DN web address using hostnames
[ https://issues.apache.org/jira/browse/HDFS-11198?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15714918#comment-15714918 ] Weiwei Yang commented on HDFS-11198: Hello [~kihwal] I think I added DN links in HDFS-10493, I can work on this one if you don't mind. > NN UI should link DN web address using hostnames > > > Key: HDFS-11198 > URL: https://issues.apache.org/jira/browse/HDFS-11198 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: 2.8.0 >Reporter: Kihwal Lee >Priority: Critical > > The new NN UI shows links to DN web pages, but since the link is from the > info address returned from jmx, it is in the IP address:port form. This > breaks if users are using filters utilizing cookies. > Since this is a new feature in 2.8, I didn't mark it as a blocker. I.e. it > does not break any existing functions. It just doesn't work properly in > certain environments. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
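One possible shape of the fix discussed here: build the datanode web link from the reported hostname plus the port parsed from the "ip:port" info address returned over JMX, instead of linking the raw IP address. This helper is hypothetical — the actual NN UI change would live in its templates/scripts, and the method name here is illustrative:

```java
public class DnLinkSketch {
    // Combine the datanode's hostname with the port taken from the "ip:port"
    // info address, so cookie-based filters keyed on hostnames keep working.
    static String webLink(String hostname, String infoAddr, boolean httpsEnabled) {
        int idx = infoAddr.lastIndexOf(':');
        String port = infoAddr.substring(idx + 1);
        return (httpsEnabled ? "https://" : "http://") + hostname + ":" + port;
    }
}
```

For example, a datanode reported as hostname dn1.example.com with info address 10.0.0.5:9864 would be linked as http://dn1.example.com:9864 rather than by its IP.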
[jira] [Commented] (HDFS-8411) Add bytes count metrics to datanode for ECWorker
[ https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15714870#comment-15714870 ] SammiChen commented on HDFS-8411: - Thanks [~rakeshr], it's a good point. I will upload a new patch. Besides {{ecReconstructionBytesRead}} and {{ecReconstructionBytesWritten}}, I will also change {{ecDecodingTimeNanos}}. > Add bytes count metrics to datanode for ECWorker > > > Key: HDFS-8411 > URL: https://issues.apache.org/jira/browse/HDFS-8411 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Li Bo >Assignee: SammiChen > Attachments: HDFS-8411-001.patch, HDFS-8411-002.patch, > HDFS-8411-003.patch, HDFS-8411-004.patch, HDFS-8411-005.patch, > HDFS-8411-006.patch, HDFS-8411-007.patch, HDFS-8411-008.patch > > > This is a sub-task of HDFS-7674. It tracks the amount of data read from local > or remote datanodes for decoding work, and also the amount of data written to > local or remote datanodes.
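The metrics named in the comment above are simple monotonically increasing counters maintained by the ECWorker. In Hadoop itself they would be registered through the metrics2 framework (e.g. {{MutableCounterLong}} fields in the datanode metrics class); the self-contained model below is only a sketch of the counting behaviour, with assumed class and method names:

```java
import java.util.concurrent.atomic.LongAdder;

// Illustrative model of the counters discussed above; the actual
// implementation would live in the datanode metrics via metrics2.
public class EcWorkerMetrics {
    private final LongAdder ecReconstructionBytesRead = new LongAdder();
    private final LongAdder ecReconstructionBytesWritten = new LongAdder();
    private final LongAdder ecDecodingTimeNanos = new LongAdder();

    // Called from the reconstruction path as striped blocks are read,
    // decoded, and the recovered block is written out.
    public void incrBytesRead(long n)        { ecReconstructionBytesRead.add(n); }
    public void incrBytesWritten(long n)     { ecReconstructionBytesWritten.add(n); }
    public void incrDecodingTime(long nanos) { ecDecodingTimeNanos.add(nanos); }

    public long bytesRead()        { return ecReconstructionBytesRead.sum(); }
    public long bytesWritten()     { return ecReconstructionBytesWritten.sum(); }
    public long decodingTimeNanos() { return ecDecodingTimeNanos.sum(); }
}
```

{{LongAdder}} is used here because the counters are write-heavy and read rarely (only when metrics are snapshotted), which matches its intended use.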
[jira] [Updated] (HDFS-8411) Add bytes count metrics to datanode for ECWorker
[ https://issues.apache.org/jira/browse/HDFS-8411?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] SammiChen updated HDFS-8411: Attachment: HDFS-8411-008.patch > Add bytes count metrics to datanode for ECWorker
[jira] [Updated] (HDFS-11197) Listing encryption zones fails when deleting a EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HDFS-11197: Status: Patch Available (was: In Progress) > Listing encryption zones fails when deleting an EZ that is on a snapshotted > directory > > > Key: HDFS-11197 > URL: https://issues.apache.org/jira/browse/HDFS-11197 > Project: Hadoop HDFS > Issue Type: Bug > Components: hdfs >Affects Versions: 2.6.0 >Reporter: Wellington Chevreuil >Assignee: Wellington Chevreuil >Priority: Minor > Attachments: HDFS-11197-1.patch, HDFS-11197-2.patch, > HDFS-11197-3.patch, HDFS-11197-4.patch > > > If an EZ directory is under a snapshottable directory, and a snapshot has been > taken, then permanently deleting this EZ causes the *hdfs crypto > -listZones* command to fail without showing any of the still-available zones. > This happens only after the EZ is removed from the Trash folder. For example, > considering that the */test-snap* folder is snapshottable and there is already a > snapshot for it: > {noformat} > $ hdfs crypto -listZones > /user/systest my-key > /test-snap/EZ-1 my-key > $ hdfs dfs -rmr /test-snap/EZ-1 > INFO fs.TrashPolicyDefault: Moved: 'hdfs://ns1/test-snap/EZ-1' to trash at: > hdfs://ns1/user/hdfs/.Trash/Current/test-snap/EZ-1 > $ hdfs crypto -listZones > /user/systest my-key > /user/hdfs/.Trash/Current/test-snap/EZ-1 my-key > $ hdfs dfs -rmr /user/hdfs/.Trash/Current/test-snap/EZ-1 > Deleted /user/hdfs/.Trash/Current/test-snap/EZ-1 > $ hdfs crypto -listZones > RemoteException: Absolute path required > {noformat} > Once this error happens, *hdfs crypto -listZones* only works again if we > remove the snapshot: > {noformat} > $ hdfs dfs -deleteSnapshot /test-snap snap1 > $ hdfs crypto -listZones > /user/systest my-key > {noformat} > If we instead delete the EZ using the *skipTrash* option, *hdfs crypto > -listZones* does not break: > {noformat} > $ hdfs crypto -listZones > /user/systest my-key > /test-snap/EZ-2 my-key > $ hdfs dfs -rmr -skipTrash /test-snap/EZ-2 > Deleted /test-snap/EZ-2 > $ hdfs crypto -listZones > /user/systest my-key > {noformat} > The different behaviour seems to occur because, when the EZ is removed from the > trash folder, its related INode is left with no parent INode. This causes > *EncryptionZoneManager.listEncryptionZones* to throw the error seen above when > trying to resolve the inodes in the given path. > I am proposing a patch that fixes this issue by simply performing an additional > check in *EncryptionZoneManager.listEncryptionZones* for the case where an inode > has no parent, so that it is skipped in the listing without an attempt to > resolve it. Feedback on the proposal is appreciated. >
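The proposed check can be illustrated with a minimal model: an encryption-zone entry whose inode has lost its parent (and is not the root) cannot be resolved to an absolute path, so the listing skips it instead of failing. This is a self-contained sketch under assumed names, not the actual {{EncryptionZoneManager}} code:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Minimal model of the proposed fix: skip zones whose inode no longer
// has a parent (e.g. an EZ permanently deleted from the trash while a
// snapshot still references it).
public class ZoneLister {
    static class INode {
        final String name;
        final INode parent;
        INode(String name, INode parent) { this.name = name; this.parent = parent; }
        boolean isRoot() { return parent == null && name.isEmpty(); }
    }

    /** Return "path key" lines for the zones that can still be resolved. */
    static List<String> listZones(Map<INode, String> zones) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<INode, String> e : zones.entrySet()) {
            INode inode = e.getKey();
            // Proposed check: an orphaned (non-root) inode has no absolute
            // path, so skip it rather than throwing on path resolution.
            if (inode.parent == null && !inode.isRoot()) {
                continue;
            }
            result.add(fullPath(inode) + " " + e.getValue());
        }
        return result;
    }

    // Walk the parent chain up to the root, building "/a/b/c".
    static String fullPath(INode inode) {
        StringBuilder sb = new StringBuilder();
        for (INode n = inode; n != null && !n.isRoot(); n = n.parent) {
            sb.insert(0, "/" + n.name);
        }
        return sb.toString();
    }
}
```

With this check, the listing degrades gracefully: the orphaned zone is simply omitted until the snapshot holding it is deleted, rather than making the entire {{hdfs crypto -listZones}} call fail.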
[jira] [Comment Edited] (HDFS-11197) Listing encryption zones fails when deleting a EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15714609#comment-15714609 ] Wellington Chevreuil edited comment on HDFS-11197 at 12/2/16 9:35 AM: -- Just removing the extra logging message from *EncryptionZoneManager.listEncryptionZones* I had left on the previous patch. was (Author: wchevreuil): Just removing the extra logging message from "EncryptionZoneManager.listEncryptionZones" I had left on the previous patch. > Listing encryption zones fails when deleting a EZ that is on a snapshotted > directory
[jira] [Updated] (HDFS-11197) Listing encryption zones fails when deleting a EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HDFS-11197: Attachment: HDFS-11197-4.patch Just removing the extra logging message from "EncryptionZoneManager.listEncryptionZones" I had left on the previous patch. > Listing encryption zones fails when deleting a EZ that is on a snapshotted > directory
[jira] [Updated] (HDFS-11197) Listing encryption zones fails when deleting a EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HDFS-11197: Attachment: HDFS-11197-3.patch For some reason, the changes I mentioned on my previous comment were not included in the last patch. Including a 3rd patch with the fix for the test and the additional unit test for checking this condition on TestEncryptionZoneManager. The findbugs warning does not seem related to any of the changes from this patch. > Listing encryption zones fails when deleting a EZ that is on a snapshotted > directory
[jira] [Updated] (HDFS-11197) Listing encryption zones fails when deleting a EZ that is on a snapshotted directory
[ https://issues.apache.org/jira/browse/HDFS-11197?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Wellington Chevreuil updated HDFS-11197: Status: In Progress (was: Patch Available) > Listing encryption zones fails when deleting a EZ that is on a snapshotted > directory