[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS
[ https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167704#comment-17167704 ] huangtianhua commented on HDFS-15025: - [~liuml07] Would you please help to review? Thanks very much :) > Applying NVDIMM storage media to HDFS > - > > Key: HDFS-15025 > URL: https://issues.apache.org/jira/browse/HDFS-15025 > Project: Hadoop HDFS > Issue Type: New Feature > Components: datanode, hdfs > Reporter: hadoop_hdfs_hw > Priority: Major > Attachments: Applying NVDIMM to HDFS.pdf, HDFS-15025.001.patch, > HDFS-15025.002.patch, HDFS-15025.003.patch, HDFS-15025.004.patch, > HDFS-15025.005.patch, HDFS-15025.006.patch, NVDIMM_patch(WIP).patch > > > The non-volatile memory NVDIMM is faster than SSD, and it can be used alongside RAM, DISK, and SSD. Storing HDFS data directly on NVDIMM not only improves the response rate of HDFS but also ensures the reliability of the data. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-15502) Implement service-user feature in DecayRPCScheduler
Takanobu Asanuma created HDFS-15502: --- Summary: Implement service-user feature in DecayRPCScheduler Key: HDFS-15502 URL: https://issues.apache.org/jira/browse/HDFS-15502 Project: Hadoop HDFS Issue Type: Improvement Reporter: Takanobu Asanuma Assignee: Takanobu Asanuma In our cluster, we want to use FairCallQueue to limit heavy users, but we do not want to restrict certain users who are submitting important requests. This jira proposes to implement a service-user feature whereby such users are always scheduled into the high-priority queue. According to HADOOP-9640, the initial concept of FCQ included this feature, but it was never implemented.
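A minimal sketch of the scheduling rule proposed above: configured service users always map to the highest-priority queue (level 0), while everyone else keeps whatever decayed priority the scheduler computed. The class and method names here are invented for illustration; this is not the DecayRpcScheduler API.

```java
import java.util.Set;

public class ServiceUserSchedulerSketch {
    private final Set<String> serviceUsers;  // users exempt from decay-based demotion
    private final int numLevels;             // number of priority levels, 0 = highest

    public ServiceUserSchedulerSketch(Set<String> serviceUsers, int numLevels) {
        this.serviceUsers = serviceUsers;
        this.numLevels = numLevels;
    }

    // Service users are always scheduled at the highest priority; other
    // callers keep the decayed level computed elsewhere, clamped to range.
    public int getPriorityLevel(String user, int decayedLevel) {
        if (serviceUsers.contains(user)) {
            return 0;
        }
        return Math.min(Math.max(decayedLevel, 0), numLevels - 1);
    }

    public static void main(String[] args) {
        ServiceUserSchedulerSketch s =
            new ServiceUserSchedulerSketch(Set.of("hbase", "yarn"), 4);
        System.out.println(s.getPriorityLevel("hbase", 3)); // service user: always 0
        System.out.println(s.getPriorityLevel("alice", 3)); // heavy user: stays demoted
    }
}
```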
[jira] [Commented] (HDFS-15502) Implement service-user feature in DecayRPCScheduler
[ https://issues.apache.org/jira/browse/HDFS-15502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167747#comment-17167747 ] Chao Sun commented on HDFS-15502: - [~tasanuma] This JIRA seems very similar to HADOOP-15016? It should also be a HADOOP jira rather than HDFS.
[jira] [Commented] (HDFS-15502) Implement service-user feature in DecayRPCScheduler
[ https://issues.apache.org/jira/browse/HDFS-15502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167761#comment-17167761 ] Takanobu Asanuma commented on HDFS-15502: - [~csun] Oh, I didn't know about that jira. Thanks for letting me know. This jira may be a duplicate, but I will just move it to HADOOP for now.
[jira] [Comment Edited] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.
[ https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167831#comment-17167831 ] Stephen O'Donnell edited comment on HDFS-15493 at 7/30/20, 11:20 AM: - {quote} So, awaitTermination 1 ms would make executor shutdown quickly. {quote} I believe if you specify a timeout of 500ms, and the threads all finish in 5ms, the call will return after 5ms. Therefore setting it to 500 or 1000ms and logging a message each time around the loop should not give any time penalty, but should give us some information about what is happening. {quote} with the same fsimage, the time cost would increase to 430s with about 10s+ time to wait two executors shutdown. {quote} How long does the shutdown take with the single 4 thread executor? I cannot see how multiple threads help, as both the methods have a lock right at the start. If multiple threads make it faster, then it would suggest the time taken to pick the task from the queue and start it running is significant. Are you testing this on the trunk code + this patch, or a different version plus this patch? Could you try testing 2 executors with 2 threads each? was (Author: sodonnell): {quote} So, awaitTermination 1 ms would make executor shutdown quickly. {quote} I believe if you specify a timeout of 500ms, and the threads all finish in 5ms, the call will return. Therefore setting it to 500 or 1000ms and logging a message each time around the loop should not give any time penalty, but should give us some information about what is happening. {quote} with the same fsimage, the time cost would increase to 430s with about 10s+ time to wait two executors shutdown. {quote} How long does the shutdown take with the single 4 thread executor? I cannot see how multiple threads help, as both the methods have a lock right at the start. 
If multiple threads make it faster, then it would suggest the time taken to pick the task from the queue and start it running is significant. Are you testing this on the trunk code + this patch, or a different version plus this patch? Could you try testing 2 executors with 2 threads each? > Update block map and name cache in parallel while loading fsimage. > -- > > Key: HDFS-15493 > URL: https://issues.apache.org/jira/browse/HDFS-15493 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode > Reporter: Chengwei Wang > Priority: Major > Attachments: HDFS-15493.001.patch, fsimage-loading.log > > > While loading the INodeDirectorySection of the fsimage, the name cache and block map are updated after each inode file is added to its inode directory. Enabling these steps to run in parallel would reduce the time cost of fsimage loading. > In our test case, with patches HDFS-13694 and HDFS-14617, the time cost to load the fsimage (220M files & 240M blocks) is 470s; with this patch, the time cost reduces to 410s.
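The awaitTermination point made above is easy to check in isolation: when all submitted tasks finish quickly, the call returns as soon as the pool terminates rather than waiting out the full timeout. A small self-contained demonstration (the names are ours, not code from the patch):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AwaitTerminationDemo {
    // Measures how long awaitTermination actually blocks when every task
    // finishes almost immediately. It should return well before the timeout.
    static long measureShutdownWaitMs(long timeoutMs) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 4; i++) {
            pool.submit(() -> { });  // trivial task, finishes in microseconds
        }
        pool.shutdown();
        long start = System.nanoTime();
        boolean terminated = pool.awaitTermination(timeoutMs, TimeUnit.MILLISECONDS);
        long waitedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("terminated=" + terminated + ", waitedMs=" + waitedMs);
        return waitedMs;
    }

    public static void main(String[] args) throws InterruptedException {
        measureShutdownWaitMs(500);  // typically waits only a few ms
    }
}
```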
[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.
[ https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167831#comment-17167831 ] Stephen O'Donnell commented on HDFS-15493: -- {quote} So, awaitTermination 1 ms would make executor shutdown quickly. {quote} I believe if you specify a timeout of 500ms, and the threads all finish in 5ms, the call will return. Therefore setting it to 500 or 1000ms and logging a message each time around the loop should not give any time penalty, but should give us some information about what is happening. {quote} with the same fsimage, the time cost would increase to 430s with about 10s+ time to wait two executors shutdown. {quote} How long does the shutdown take with the single 4 thread executor? I cannot see how multiple threads help, as both the methods have a lock right at the start. If multiple threads make it faster, then it would suggest the time taken to pick the task from the queue and start it running is significant. Are you testing this on the trunk code + this patch, or a different version plus this patch? Could you try testing 2 executors with 2 threads each?
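The experiment suggested in the comment, two executors with two threads each in place of a single four-thread executor, could be wired up along these lines. The task bodies are stubs standing in for the block-map and name-cache updates; the names are ours, not the patch's.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class TwoExecutorLoadSketch {
    // Submits tasksPerSection stub tasks to each of two 2-thread executors,
    // one standing in for block-map updates and one for name-cache updates,
    // then shuts both down and returns the total number of tasks processed.
    static int loadInParallel(int tasksPerSection) throws InterruptedException {
        ExecutorService blockMapPool = Executors.newFixedThreadPool(2);
        ExecutorService nameCachePool = Executors.newFixedThreadPool(2);
        AtomicInteger processed = new AtomicInteger();
        for (int i = 0; i < tasksPerSection; i++) {
            blockMapPool.submit(processed::incrementAndGet);   // stub block-map update
            nameCachePool.submit(processed::incrementAndGet);  // stub name-cache update
        }
        blockMapPool.shutdown();
        nameCachePool.shutdown();
        // Generous timeouts; awaitTermination returns early once all tasks finish.
        blockMapPool.awaitTermination(5, TimeUnit.SECONDS);
        nameCachePool.awaitTermination(5, TimeUnit.SECONDS);
        return processed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(loadInParallel(100));  // prints 200
    }
}
```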
[jira] [Commented] (HDFS-15025) Applying NVDIMM storage media to HDFS
[ https://issues.apache.org/jira/browse/HDFS-15025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167872#comment-17167872 ] Hadoop QA commented on HDFS-15025: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 46s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} dupname {color} | {color:green} 0m 0s{color} | {color:green} No case conflicting files found. {color} | | {color:blue}0{color} | {color:blue} prototool {color} | {color:blue} 0m 0s{color} | {color:blue} prototool was not available. {color} | | {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue} 0m 0s{color} | {color:blue} markdownlint was not available. {color} | | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 16 new or modified test files. 
{color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 3m 21s{color} | {color:blue} Maven dependency ordering for branch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 31m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 25s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 25m 47s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s{color} | {color:green} trunk passed {color} | | {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 3m 12s{color} | {color:blue} Used deprecated FindBugs config; considering switching to SpotBugs. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 53s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 28s{color} | {color:blue} Maven dependency ordering for patch {color} | | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 22s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 30s{color} | {color:green} the patch passed {color} | | {color:red}-1{color} | {color:red} cc {color} | {color:red} 20m 30s{color} | {color:red} root generated 28 new + 134 unchanged - 28 fixed = 162 total (was 162) {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 30s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 3m 29s{color} | {color:orange} root: The patch generated 3 new + 725 unchanged - 4 fixed = 728 total (was 729) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 4m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s{color} | {color:green} The patch has no ill-formed XML file. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 54s{color} | {color:green} patch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 52s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 9m 23s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 25s{color} | {color:green} hadoop-common in the patch passed. {color} | | {color:green}+1{color} | {color:green} unit {color} | {color:green} 2m 20s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} | | {color:red}-1{color} | {color:red} unit {color} | {color:red}102m 21s{color} | {color:red} hadoop-hdfs in the patch passed. {color} | | {color:
[jira] [Updated] (HDFS-15313) Ensure inodes in active filesystem are not deleted during snapshot delete
[ https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HDFS-15313: - Fix Version/s: 2.10.1 > Ensure inodes in active filesystem are not deleted during snapshot delete > - > > Key: HDFS-15313 > URL: https://issues.apache.org/jira/browse/HDFS-15313 > Project: Hadoop HDFS > Issue Type: Bug > Components: snapshots > Reporter: Shashikant Banerjee > Assignee: Shashikant Banerjee > Priority: Major > Fix For: 3.2.2, 2.10.1, 3.3.1, 3.4.0, 3.1.5 > > Attachments: HDFS-15313-branch-3.1.001.patch, HDFS-15313.000.patch, > HDFS-15313.001.patch, HDFS-15313.branch-2.10.001.patch, > HDFS-15313.branch-2.10.patch, HDFS-15313.branch-2.8.patch > > > After HDFS-13101, it was observed in one of our customer deployments that deleting a snapshot ends up cleaning up inodes from the active fs which are referred from only one snapshot, as the isLastReference() check for the parent dir introduced in HDFS-13101 may return true in certain cases. The aim of this Jira is to add a check to ensure that inodes referred to in the active fs do not get deleted while a snapshot is deleted.
[jira] [Commented] (HDFS-15313) Ensure inodes in active filesystem are not deleted during snapshot delete
[ https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17167980#comment-17167980 ] Stephen O'Donnell commented on HDFS-15313: -- Committed HDFS-15313.branch-2.10.001.patch to branch-2.10. That is all the branches taken care of, so closing this now.
[jira] [Updated] (HDFS-15313) Ensure inodes in active filesystem are not deleted during snapshot delete
[ https://issues.apache.org/jira/browse/HDFS-15313?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stephen O'Donnell updated HDFS-15313: - Resolution: Fixed Status: Resolved (was: Patch Available)
[jira] [Commented] (HDFS-14570) Bring back ability to totally disable webhdfs by bringing dfs.webhdfs.enabled property back into the hdfs-site.xml
[ https://issues.apache.org/jira/browse/HDFS-14570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168036#comment-17168036 ] Srinivasu Majeti commented on HDFS-14570: - Hi [~inigoiri], do you have any custom handler available to share here? Hi [~weichiu] or [~arpaga], do we have any way to disable webhdfs out of the box but enable it for jmx alone? > Bring back ability to totally disable webhdfs by bringing dfs.webhdfs.enabled > property back into the hdfs-site.xml > -- > > Key: HDFS-14570 > URL: https://issues.apache.org/jira/browse/HDFS-14570 > Project: Hadoop HDFS > Issue Type: Wish > Components: webhdfs > Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.0.2, 3.2.0, 3.1.1, 3.0.3, 3.1.2 > Reporter: Scott A. Wehner > Priority: Major > Labels: webhdfs > Original Estimate: 6h > Remaining Estimate: 6h > > We don't want to enable security for viewing the namenode http page, but we don't want people to be able to modify the contents of hdfs through anonymous access to the namenode page. In Hadoop 3 we lost the ability to totally disable webhdfs. We want to bring this back; it doesn't seem too hard to do, and it is important in our environment.
[jira] [Created] (HDFS-15503) File and directory permissions are not able to be modified from WebUI
Hemanth Boyina created HDFS-15503: - Summary: File and directory permissions are not able to be modified from WebUI Key: HDFS-15503 URL: https://issues.apache.org/jira/browse/HDFS-15503 Project: Hadoop HDFS Issue Type: Bug Reporter: Hemanth Boyina Assignee: Hemanth Boyina After upgrading bootstrap from 3.3.7 to 3.4.1, the bootstrap popover content is not being shown in the Browse File System Permission column.
[jira] [Assigned] (HDFS-15500) Add more assertions about ordered deletion of snapshot
[ https://issues.apache.org/jira/browse/HDFS-15500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tsz-wo Sze reassigned HDFS-15500: - Assignee: (was: Tsz-wo Sze) > Add more assertions about ordered deletion of snapshot > -- > > Key: HDFS-15500 > URL: https://issues.apache.org/jira/browse/HDFS-15500 > Project: Hadoop HDFS > Issue Type: Sub-task > Reporter: Mukul Kumar Singh > Priority: Major > > This jira proposes to add new assertions. One assertion to start with is: > a) Add an assertion that, with the ordered snapshot deletion flag true, the prior snapshot in cleansubtree is null.
[jira] [Commented] (HDFS-12969) DfsAdmin listOpenFiles should report files by type
[ https://issues.apache.org/jira/browse/HDFS-12969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168094#comment-17168094 ] Hemanth Boyina commented on HDFS-12969: --- Thanks for the comment [~tasanuma]. {quote}Does this assumption always hold true {quote} Very nice point, I think this assumption may not always hold true. > DfsAdmin listOpenFiles should report files by type > -- > > Key: HDFS-12969 > URL: https://issues.apache.org/jira/browse/HDFS-12969 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs > Affects Versions: 3.1.0 > Reporter: Manoj Govindassamy > Assignee: Hemanth Boyina > Priority: Major > Attachments: HDFS-12969.001.patch, HDFS-12969.002.patch, > HDFS-12969.003.patch > > > HDFS-11847 introduced a new option, {{-blockingDecommission}}, to the existing command {{dfsadmin -listOpenFiles}}. But the reporting done by the command doesn't differentiate the files based on type (like blocking decommission). In order to change the reporting style, the proto format used for the base command has to be updated to carry additional fields; this is better done in a new jira outside of HDFS-11847. This jira tracks the end-to-end enhancements needed for the dfsadmin -listOpenFiles console output.
[jira] [Updated] (HDFS-15503) File and directory permissions are not able to be modified from WebUI
[ https://issues.apache.org/jira/browse/HDFS-15503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HDFS-15503: -- Attachment: before-HDFS-15503.png after-HDFS-15503.png
[jira] [Updated] (HDFS-15503) File and directory permissions are not able to be modified from WebUI
[ https://issues.apache.org/jira/browse/HDFS-15503?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Hemanth Boyina updated HDFS-15503: -- Attachment: HDFS-15503.001.patch Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15481) Ordered snapshot deletion: garbage collect deleted snapshots
[ https://issues.apache.org/jira/browse/HDFS-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15481: --- Fix Version/s: 3.4.0 Resolution: Fixed Status: Resolved (was: Patch Available) > Ordered snapshot deletion: garbage collect deleted snapshots > > > Key: HDFS-15481 > URL: https://issues.apache.org/jira/browse/HDFS-15481 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots > Reporter: Tsz-wo Sze > Assignee: Tsz-wo Sze > Priority: Major > Fix For: 3.4.0 > > Attachments: h15481_20200723.patch, h15481_20200723b.patch > > > When the earliest snapshot is actually deleted, if the subsequent snapshots are already marked as deleted, they can also actually be removed from the file system. In this JIRA, we implement a mechanism to garbage collect these snapshots.
[jira] [Comment Edited] (HDFS-15482) Ordered snapshot deletion: hide the deleted snapshots from users
[ https://issues.apache.org/jira/browse/HDFS-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168103#comment-17168103 ] Jitendra Nath Pandey edited comment on HDFS-15482 at 7/30/20, 5:37 PM: --- We will need to consider a few cases here. # Do we allow creating a snapshot with the same name once a snapshot is marked for deletion but not actually deleted? If deleted snapshots are no longer visible, a user might want to create a snapshot with the same name and be surprised if it fails. On the other hand, if we allow it, the system has two snapshots with the same name. # If the snapshot is deleted and hidden, does the user get to force an immediate delete (if it is in order)? It makes sense to allow users to delete immediately if they are following the order. But a hidden snapshot will not be accessible anymore. This gets more complicated if a user creates a snapshot with the same name. was (Author: jnp): We will need to consider a few cases here. 1) Do we allow creating a snapshot with the same name once a snapshot is marked for deletion but not actually deleted? If deleted snapshots are no longer visible, a user might want to create a snapshot with the same name and be surprised if it fails. On the other hand, if we allow it, the system has two snapshots with the same name. 2) If the snapshot is deleted and hidden, does the user get to force an immediate delete (if it is in order)? It makes sense to allow users to delete immediately if they are following the order. But a hidden snapshot will not be accessible anymore. This gets more complicated if a user creates a snapshot with the same name.
> Ordered snapshot deletion: hide the deleted snapshots from users > > > Key: HDFS-15482 > URL: https://issues.apache.org/jira/browse/HDFS-15482 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: snapshots > Reporter: Tsz-wo Sze > Assignee: Shashikant Banerjee > Priority: Major > > In HDFS-15480, the behavior of deleting the non-earliest snapshots was changed to marking them as deleted in an XAttr but not actually deleting them. Users are still able to access these snapshots as usual. > In this JIRA, the marked-for-deletion snapshots are hidden so that they become inaccessible to users.
[jira] [Commented] (HDFS-15482) Ordered snapshot deletion: hide the deleted snapshots from users
[ https://issues.apache.org/jira/browse/HDFS-15482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168103#comment-17168103 ] Jitendra Nath Pandey commented on HDFS-15482: - We will need to consider a few cases here. 1) Do we allow creating a snapshot with the same name once a snapshot is marked for deletion but not actually deleted? If deleted snapshots are no longer visible, a user might want to create a snapshot with the same name and be surprised if it fails. On the other hand, if we allow it, the system has two snapshots with the same name. 2) If the snapshot is deleted and hidden, does the user get to force an immediate delete (if it is in order)? It makes sense to allow users to delete immediately if they are following the order. But a hidden snapshot will not be accessible anymore. This gets more complicated if a user creates a snapshot with the same name.
[jira] [Commented] (HDFS-15481) Ordered snapshot deletion: garbage collect deleted snapshots
[ https://issues.apache.org/jira/browse/HDFS-15481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168118#comment-17168118 ] Hudson commented on HDFS-15481:
-------------------------------

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18482 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18482/])
HDFS-15481. Ordered snapshot deletion: garbage collect deleted snapshots (github: rev 05b3337a4605dcb6904cb3fe2a58e4dc424ef015)
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/common/HdfsServerConstants.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOrderedSnapshotDeletion.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/DirectorySnapshottableFeature.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/FSImageFormatPBSnapshot.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotDeletionGc.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirectory.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/AbstractINodeDiffList.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/SnapshotManager.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestSnapshotDeletion.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSDirXAttrOp.java
* (add) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/snapshot/TestOrderedSnapshotDeletionGc.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/snapshot/Snapshot.java
* (delete) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/TestOrderedSnapshotDeletion.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/INodeDirectory.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/DFSConfigKeys.java
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java

> Ordered snapshot deletion: garbage collect deleted snapshots
> ------------------------------------------------------------
>
>                 Key: HDFS-15481
>                 URL: https://issues.apache.org/jira/browse/HDFS-15481
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: snapshots
>            Reporter: Tsz-wo Sze
>            Assignee: Tsz-wo Sze
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: h15481_20200723.patch, h15481_20200723b.patch
>
> When the earliest snapshot is actually deleted, and the subsequent snapshots are already marked as deleted, those subsequent snapshots can also be removed from the file system. This JIRA implements a mechanism to garbage collect such snapshots.
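The ordered-deletion rule described in the issue above (a snapshot marked deleted is physically removed only once every earlier snapshot is gone) can be sketched as follows. This is an illustrative model only; the class, record, and method names are hypothetical and do not reflect the actual SnapshotDeletionGc implementation.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class OrderedSnapshotGcSketch {
    /** Illustrative snapshot entry: name plus a "marked deleted" flag. */
    record Snapshot(String name, boolean markedDeleted) {}

    /**
     * Garbage-collects the leading run of snapshots that are marked deleted.
     * Snapshots are kept in creation order, so the loop stops at the first
     * live snapshot; deleted snapshots behind it stay blocked.
     * Returns the number of snapshots physically removed.
     */
    static int collect(Deque<Snapshot> inCreationOrder) {
        int removed = 0;
        while (!inCreationOrder.isEmpty()
                && inCreationOrder.peekFirst().markedDeleted()) {
            inCreationOrder.pollFirst();
            removed++;
        }
        return removed;
    }

    public static void main(String[] args) {
        Deque<Snapshot> snaps = new ArrayDeque<>();
        snaps.add(new Snapshot("s1", true));
        snaps.add(new Snapshot("s2", true));
        snaps.add(new Snapshot("s3", false));
        snaps.add(new Snapshot("s4", true)); // blocked behind live s3
        System.out.println(collect(snaps)); // 2 (s1 and s2 collected)
        System.out.println(snaps.size());   // 2 (s3 and s4 remain)
    }
}
```

The same prefix scan would rerun each time another snapshot is marked deleted, which is why deleting the earliest snapshot can cascade into removing several successors at once.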
[jira] [Commented] (HDFS-15503) File and directory permissions are not able to be modified from WebUI
[ https://issues.apache.org/jira/browse/HDFS-15503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168174#comment-17168174 ] Íñigo Goiri commented on HDFS-15503:
------------------------------------

+1 on [^HDFS-15503.001.patch].

> File and directory permissions are not able to be modified from WebUI
> ---------------------------------------------------------------------
>
>                 Key: HDFS-15503
>                 URL: https://issues.apache.org/jira/browse/HDFS-15503
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Hemanth Boyina
>            Assignee: Hemanth Boyina
>            Priority: Major
>         Attachments: HDFS-15503.001.patch, after-HDFS-15503.png, before-HDFS-15503.png
>
> After upgrading Bootstrap from 3.3.7 to 3.4.1, the Bootstrap popover content is no longer shown in the Permission column of the Browse File System page.
[jira] [Commented] (HDFS-14570) Bring back ability to totally disable webhdfs by bringing dfs.webhdfs.enabled property back into the hdfs-site.xml
[ https://issues.apache.org/jira/browse/HDFS-14570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168173#comment-17168173 ] Íñigo Goiri commented on HDFS-14570:
------------------------------------

[~smajeti], we didn't go all the way there but we had HADOOP-16680.

> Bring back ability to totally disable webhdfs by bringing dfs.webhdfs.enabled property back into the hdfs-site.xml
> ------------------------------------------------------------------------------------------------------------------
>
>                 Key: HDFS-14570
>                 URL: https://issues.apache.org/jira/browse/HDFS-14570
>             Project: Hadoop HDFS
>          Issue Type: Wish
>          Components: webhdfs
>    Affects Versions: 3.0.0, 3.1.0, 3.0.1, 3.0.2, 3.2.0, 3.1.1, 3.0.3, 3.1.2
>            Reporter: Scott A. Wehner
>            Priority: Major
>              Labels: webhdfs
>   Original Estimate: 6h
>  Remaining Estimate: 6h
>
> We don't want to enable security just for viewing the NameNode HTTP page, but we also don't want people to be able to modify the contents of HDFS through anonymous access to that page. In Hadoop 3 we lost the ability to totally disable WebHDFS. We want to bring this back; it doesn't seem too hard to do, and it is important in our environment.
[jira] [Created] (HDFS-15504) Bootstrap failed and return ERR_CODE_LOGS_UNAVAILABLE
xuzq created HDFS-15504:
---------------------------

             Summary: Bootstrap failed and return ERR_CODE_LOGS_UNAVAILABLE
                 Key: HDFS-15504
                 URL: https://issues.apache.org/jira/browse/HDFS-15504
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: xuzq

Bootstrap failed and returned ERR_CODE_LOGS_UNAVAILABLE when _*dfs.ha.tail-edits.in-progress=true*_.

The code is shown below; it throws an IOException in *_checkForGaps_* when the number of missed edits is greater than _*dfs.ha.tail-edits.qjm.rpc.max-txns*_.

{code:java}
public Collection<EditLogInputStream> selectInputStreams(long fromTxId,
    long toAtLeastTxId, MetaRecoveryContext recovery, boolean inProgressOk,
    boolean onlyDurableTxns) throws IOException {
  List<EditLogInputStream> streams = new ArrayList<>();
  synchronized(journalSetLock) {
    Preconditions.checkState(journalSet.isOpen(), "Cannot call " +
        "selectInputStreams() on closed FSEditLog");
    selectInputStreams(streams, fromTxId, inProgressOk, onlyDurableTxns);
  }

  try {
    checkForGaps(streams, fromTxId, toAtLeastTxId, inProgressOk);
  } catch (IOException e) {
    if (recovery != null) {
      // If recovery mode is enabled, continue loading even if we know we
      // can't load up to toAtLeastTxId.
      LOG.error("Exception while selecting input streams", e);
    } else {
      closeAllStreams(streams);
      throw e;
    }
  }
  return streams;
}
{code}
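For illustration, the gap check that triggers the failure above can be modeled as a coverage test over the txid ranges of the selected streams: loading fails when the streams leave a hole between fromTxId and toAtLeastTxId. This is a hypothetical sketch, not Hadoop's actual FSEditLog#checkForGaps code; the record and method names are assumptions made for the example.

```java
import java.io.IOException;
import java.util.List;

public class GapCheckSketch {
    /** Illustrative edit-log stream: a contiguous range of txids. */
    record TxRange(long first, long last) {}

    /**
     * Walks the streams (assumed sorted by first txid) and throws if they
     * do not cover [fromTxId, toAtLeastTxId] without a hole -- the condition
     * that surfaces as ERR_CODE_LOGS_UNAVAILABLE during bootstrap.
     */
    static void checkForGaps(List<TxRange> streams, long fromTxId,
                             long toAtLeastTxId) throws IOException {
        long next = fromTxId; // next txid we still need to see
        for (TxRange r : streams) {
            if (r.first() > next) {
                throw new IOException("Gap in transactions: expected txid "
                    + next + " but next stream starts at " + r.first());
            }
            next = Math.max(next, r.last() + 1);
        }
        if (next <= toAtLeastTxId) {
            throw new IOException("Logs end at txid " + (next - 1)
                + ", expected at least " + toAtLeastTxId);
        }
    }

    public static void main(String[] args) throws IOException {
        // Overlapping ranges are fine; only holes are fatal.
        checkForGaps(List.of(new TxRange(1, 10), new TxRange(8, 25)), 1, 25);
        System.out.println("streams cover [1, 25] with no gap");
    }
}
```

Under this model, capping the edits fetched per RPC (dfs.ha.tail-edits.qjm.rpc.max-txns) can leave `next` short of toAtLeastTxId, producing exactly the kind of exception quoted in the issue.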
[jira] [Commented] (HDFS-15493) Update block map and name cache in parallel while loading fsimage.
[ https://issues.apache.org/jira/browse/HDFS-15493?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17168347#comment-17168347 ] Chengwei Wang commented on HDFS-15493:
--------------------------------------

{quote}Therefore setting it to 500 or 1000ms and logging a message each time around the loop should not give any time penalty, but should give us some information about what is happening.
{quote}
Yes, you are exactly right! The longer waiting interval and the logging would be useful; I will add them.

{quote}How long does the shutdown take with the single 4 thread executor?
{quote}
I assumed the waiting time was the time from `Completed loading all INodeDirectory sub-sections` to the end of fsimage loading.
{code:java}
20/07/31 10:25:59 INFO namenode.FSImageFormatPBINode: Completed loading all INodeDirectory sub-sections
20/07/31 10:26:22 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 431 seconds.
{code}

{quote}Are you testing this on the trunk code + this patch, or a different version plus this patch?
{quote}
I tested this patch on our dev branch, which is based on CDH 5.10.0 with many patches; the version should be between 2.6.0 and 2.8.0.

{quote}Could you try testing 2 executors with 2 threads each?
{quote}
I tested this after testing the two single-thread executors; the time cost was between 420s and 430s.

I will submit 3 new patches:
# one executor with 4 threads, with waiting-time logging
# two single-thread executors, with waiting-time logging and without the lock
# two fixed 2-thread executors, with the lock and waiting-time logging

Let's test which one performs best.

> Update block map and name cache in parallel while loading fsimage.
> ------------------------------------------------------------------
>
>                 Key: HDFS-15493
>                 URL: https://issues.apache.org/jira/browse/HDFS-15493
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Chengwei Wang
>            Priority: Major
>         Attachments: HDFS-15493.001.patch, fsimage-loading.log
>
> While loading the INodeDirectorySection of the fsimage, the name cache and block map are updated after each inode file is added to its inode directory. Running these steps in parallel would reduce the time cost of fsimage loading.
> In our test case, with patches HDFS-13694 and HDFS-14617, the time to load the fsimage (220M files & 240M blocks) is 470s; with this patch, the time drops to 410s.
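A minimal sketch of the executor scheme being benchmarked above: the main loading thread hands the name-cache and block-map updates to two single-thread executors, then waits for both pools to drain in short intervals so it can log progress while waiting, as suggested in the comment. All names here are illustrative assumptions; this is not the actual FSImageFormatPBINode code.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelFsimageLoadSketch {
    /**
     * Simulates loading a batch of inode files: the main thread would attach
     * each file to its parent directory, while the name-cache and block-map
     * updates run on their own single-thread pools. Returns
     * {nameCacheUpdates, blockMapUpdates} once both pools have drained.
     */
    static int[] load(List<String> inodeFiles) throws InterruptedException {
        ExecutorService nameCachePool = Executors.newSingleThreadExecutor();
        ExecutorService blockMapPool = Executors.newSingleThreadExecutor();
        AtomicInteger cached = new AtomicInteger();
        AtomicInteger mapped = new AtomicInteger();

        for (String f : inodeFiles) {
            // main thread: attach f to its parent directory (omitted here)
            nameCachePool.submit(cached::incrementAndGet); // name cache update
            blockMapPool.submit(mapped::incrementAndGet);  // block map update
        }

        // Shut down and wait in 500ms intervals, logging each time around
        // the loop so a long drain is visible instead of silent.
        nameCachePool.shutdown();
        blockMapPool.shutdown();
        while (!nameCachePool.awaitTermination(500, TimeUnit.MILLISECONDS)) {
            System.out.println("waiting for name cache updates...");
        }
        while (!blockMapPool.awaitTermination(500, TimeUnit.MILLISECONDS)) {
            System.out.println("waiting for block map updates...");
        }
        return new int[] {cached.get(), mapped.get()};
    }

    public static void main(String[] args) throws InterruptedException {
        int[] counts = load(List.of("f1", "f2", "f3"));
        System.out.println(counts[0] + " name cache / "
            + counts[1] + " block map updates");
    }
}
```

The single-thread pools preserve submission order within each structure, which is why the variants under discussion differ mainly in thread count and whether a shared lock is still needed.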
[jira] [Updated] (HDFS-15498) Show snapshots deletion status in snapList cmd
[ https://issues.apache.org/jira/browse/HDFS-15498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shashikant Banerjee updated HDFS-15498:
---------------------------------------

    Status: Patch Available  (was: Open)

> Show snapshots deletion status in snapList cmd
> ----------------------------------------------
>
>                 Key: HDFS-15498
>                 URL: https://issues.apache.org/jira/browse/HDFS-15498
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: snapshots
>            Reporter: Shashikant Banerjee
>            Assignee: Shashikant Banerjee
>            Priority: Major
>             Fix For: 3.4.0
>
>         Attachments: HDFS-15498.000.patch
>
> HDFS-15488 adds a cmd to list all snapshots for a given snapshottable directory. With the ordered-deletion config set, a snapshot can be just marked as deleted. This Jira aims to add the deletion status to the cmd output.
>
> SAMPLE OUTPUT:
> {noformat}
> sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshottableDir
> drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 2 65536 /user
> sbanerjee-MBP15:hadoop-3.4.0-SNAPSHOT sbanerjee$ bin/hdfs lsSnapshot /user
> drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:52 1 ACTIVE /user/.snapshot/s1
> drwxr-xr-x 0 sbanerjee supergroup 0 2020-07-27 11:51 0 DELETED /user/.snapshot/s20200727-115156.407{noformat}
[jira] [Updated] (HDFS-15504) Bootstrap failed and return ERR_CODE_LOGS_UNAVAILABLE
[ https://issues.apache.org/jira/browse/HDFS-15504?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] xuzq updated HDFS-15504:
------------------------

    Attachment: HDFS-15504-001.patch
        Status: Patch Available  (was: Open)

> Bootstrap failed and return ERR_CODE_LOGS_UNAVAILABLE
> -----------------------------------------------------
>
>                 Key: HDFS-15504
>                 URL: https://issues.apache.org/jira/browse/HDFS-15504
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: xuzq
>            Priority: Major
>         Attachments: HDFS-15504-001.patch
>
> Bootstrap failed and returned ERR_CODE_LOGS_UNAVAILABLE when _*dfs.ha.tail-edits.in-progress=true*_.
>
> The code is shown below; it throws an IOException in *_checkForGaps_* when the number of missed edits is greater than _*dfs.ha.tail-edits.qjm.rpc.max-txns*_.
> {code:java}
> public Collection<EditLogInputStream> selectInputStreams(long fromTxId,
>     long toAtLeastTxId, MetaRecoveryContext recovery, boolean inProgressOk,
>     boolean onlyDurableTxns) throws IOException {
>   List<EditLogInputStream> streams = new ArrayList<>();
>   synchronized(journalSetLock) {
>     Preconditions.checkState(journalSet.isOpen(), "Cannot call " +
>         "selectInputStreams() on closed FSEditLog");
>     selectInputStreams(streams, fromTxId, inProgressOk, onlyDurableTxns);
>   }
>   try {
>     checkForGaps(streams, fromTxId, toAtLeastTxId, inProgressOk);
>   } catch (IOException e) {
>     if (recovery != null) {
>       // If recovery mode is enabled, continue loading even if we know we
>       // can't load up to toAtLeastTxId.
>       LOG.error("Exception while selecting input streams", e);
>     } else {
>       closeAllStreams(streams);
>       throw e;
>     }
>   }
>   return streams;
> }{code}