[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17039720#comment-17039720 ] Siddharth Wagle commented on HDFS-15154: Looking into the test failure will update the patch shortly. > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17039720#comment-17039720 ] Siddharth Wagle edited comment on HDFS-15154 at 2/19/20 5:59 AM: - Looking into the test failures, will update the patch shortly.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17039720#comment-17039720 ] Siddharth Wagle edited comment on HDFS-15154 at 2/19/20 6:03 AM: - Looking into the test failures, most of them seem to fail due to OOM, retriggering.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17039720#comment-17039720 ] Siddharth Wagle edited comment on HDFS-15154 at 2/19/20 8:29 AM: - Looking into the test failures, most of them seem to fail due to OOM, retriggering. 04 => checkstyle fixes.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.04.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17040323#comment-17040323 ] Siddharth Wagle commented on HDFS-15154: I realized that the deprecated key is now ignored, trying to figure out a clean way to handle deprecation.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17040323#comment-17040323 ] Siddharth Wagle edited comment on HDFS-15154 at 2/19/20 6:38 PM: - I realized that the deprecated key is now ignored, trying to figure out a clean way to handle deprecation.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.05.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17042127#comment-17042127 ] Siddharth Wagle commented on HDFS-15154: Hi [~ayushtkn], thanks for the suggestion, I was just about to upload a new version :-) So, Configuration has a DeprecationContext, and it handles deprecation by making sure the new config key gets the deprecated config's value once the deprecation is added to the context. HdfsConfiguration does this by statically loading a set of keys into the DeprecationContext; we also get a log warning as a result. I wanted to make use of this in the new patch. What I would have liked to do is override valueOf in the _enum_ to make this even cleaner and more readable, but unfortunately Java does not allow it. In the new patch I am verifying that the deprecated key works correctly as well; do let me know what you think. I wanted to avoid special handling at the call site for the deprecation.
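The DeprecationContext behavior described in the comment above can be modeled with a minimal, self-contained sketch (plain Java, not Hadoop's actual org.apache.hadoop.conf classes; the new key name "dfs.storage.policy.mode" is hypothetical, only dfs.storage.policy.enabled appears in this issue):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of Configuration + DeprecationContext: when the new key
// is absent, a registered deprecation makes a read of the new key fall back
// to the deprecated key's value, emitting a warning.
public class DeprecationSketch {
    private final Map<String, String> props = new HashMap<>();
    private final Map<String, String> deprecations = new HashMap<>(); // new -> old

    void addDeprecation(String oldKey, String newKey) {
        deprecations.put(newKey, oldKey);
    }

    void set(String key, String value) {
        props.put(key, value);
    }

    String get(String newKey) {
        if (props.containsKey(newKey)) {
            return props.get(newKey); // new key wins when explicitly set
        }
        String oldKey = deprecations.get(newKey);
        if (oldKey != null && props.containsKey(oldKey)) {
            System.err.println("WARN: " + oldKey + " is deprecated; use " + newKey);
            return props.get(oldKey); // fall back to the deprecated key
        }
        return null;
    }

    public static void main(String[] args) {
        DeprecationSketch conf = new DeprecationSketch();
        // Hypothetical new key name, for illustration only.
        conf.addDeprecation("dfs.storage.policy.enabled", "dfs.storage.policy.mode");
        conf.set("dfs.storage.policy.enabled", "false");
        System.out.println(conf.get("dfs.storage.policy.mode")); // prints "false"
    }
}
```

In real code the registration would go through Hadoop's static Configuration.addDeprecation / HdfsConfiguration blocks, which is why the call sites can keep reading only the new key.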
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17042127#comment-17042127 ] Siddharth Wagle edited comment on HDFS-15154 at 2/21/20 7:33 PM: - Hi [~ayushtkn], thanks for the suggestion, I was just about to upload a new version :-) So, Configuration has a DeprecationContext, and it handles deprecation by making sure the new config key gets the deprecated config's value once the deprecation is added to the context and the new key is not in the configs. HdfsConfiguration does this by statically loading a set of keys into the DeprecationContext; we also get a log warning as a result. I wanted to make use of this in the new patch. What I would have liked to do is override valueOf in the _enum_ to make this even cleaner and more readable, but unfortunately Java does not allow it. In the new patch I am verifying that the deprecated key works correctly as well; do let me know what you think. I wanted to avoid special handling at the call site for the deprecation.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17042158#comment-17042158 ] Siddharth Wagle commented on HDFS-15154: I did not find other examples of a config *type* changing; the deprecation handling works cleanly only if the type remains the same. Do you think we should go back to 2 booleans? [~arp] comment?
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17042158#comment-17042158 ] Siddharth Wagle edited comment on HDFS-15154 at 2/21/20 8:37 PM: - I did not find other examples of a config *type* changing; the deprecation handling works cleanly only if the type remains the same. Do you think we should go back to 2 booleans? [~arp] comment? Else just handle both the configs everywhere and respect new if it exists over old? Ugly, but no better option.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17042158#comment-17042158 ] Siddharth Wagle edited comment on HDFS-15154 at 2/21/20 9:10 PM: - I did not find other examples of a config *type* changing; the deprecation handling works cleanly only if the type remains the same. Do you think we should go back to 2 booleans? [~arp] comment? Else just handle both the configs everywhere and respect new if it exists over old? Ugly, but no better option.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17042178#comment-17042178 ] Siddharth Wagle commented on HDFS-15154: Actually I will make the change to make sure we don't break compat. Updating patch shortly.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.06.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17042192#comment-17042192 ] Siddharth Wagle commented on HDFS-15154: 06 => Instead of the DeprecationContext, handled the deprecation in the DFSUtil call that gets the settings.
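The call-site handling mentioned here, where the deprecated boolean is translated into the new multi-valued setting and the new key wins when both are present, might look roughly like this sketch (the new key name and the Mode enum values are hypothetical; only dfs.storage.policy.enabled comes from this issue, and java.util.Properties stands in for Hadoop's Configuration):

```java
import java.util.Properties;

// Sketch of deprecation handling when the config *type* changes: the old
// boolean dfs.storage.policy.enabled is mapped onto a richer mode, and an
// explicitly set new key takes precedence over the deprecated one.
public class StoragePolicyModeSketch {
    enum Mode { DISABLED, ENABLED, SUPERUSER_ONLY }

    static final String OLD_KEY = "dfs.storage.policy.enabled";
    static final String NEW_KEY = "dfs.storage.policy.permissions.mode"; // hypothetical

    static Mode getMode(Properties conf) {
        String v = conf.getProperty(NEW_KEY);
        if (v != null) {
            return Mode.valueOf(v.trim().toUpperCase()); // new key wins
        }
        // Fall back to the deprecated boolean; it defaulted to true.
        String old = conf.getProperty(OLD_KEY, "true");
        return Boolean.parseBoolean(old) ? Mode.ENABLED : Mode.DISABLED;
    }

    public static void main(String[] args) {
        Properties conf = new Properties();
        conf.setProperty(OLD_KEY, "false");
        System.out.println(getMode(conf)); // prints DISABLED via the deprecated key
        conf.setProperty(NEW_KEY, "superuser_only");
        System.out.println(getMode(conf)); // prints SUPERUSER_ONLY, new key wins
    }
}
```

Concentrating this translation in one utility call (the DFSUtil approach above) keeps the rest of the namenode code reading a single, well-typed setting.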
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17042340#comment-17042340 ] Siddharth Wagle commented on HDFS-15154: 07 => 06 + checkstyle fixed.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.07.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17045964#comment-17045964 ] Siddharth Wagle commented on HDFS-15154: Hi [~ayushtkn]/[~arp], what do you think about the changes in the latest patch? Any concerns?
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.08.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047005#comment-17047005 ] Siddharth Wagle commented on HDFS-15154: 08 => Explicitly verified that the deprecated config still takes effect in the absence of the new config.
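The fallback behavior this comment describes can be sketched as follows. This is an illustrative sketch, not the patch's actual code: `dfs.storage.policy.enabled` is the real deprecated key from HDFS-7093, but the new key name and the mode values here are assumptions made up for the example.

```java
import java.util.Map;

// Sketch of the precedence described above: the new config, when present,
// trumps the deprecated boolean; the deprecated boolean still takes effect
// only when the new key is absent. Key/value names below are illustrative.
public class StoragePolicyConf {
    // Real deprecated key (HDFS-7093); defaults to true in hdfs-default.xml.
    static final String DEPRECATED_KEY = "dfs.storage.policy.enabled";
    // Placeholder name for the new enum-valued config, not the actual key.
    static final String NEW_KEY = "dfs.storage.policy.permissions";

    /** Returns the effective mode, e.g. "DISABLED", "ENABLED", "SUPERUSER_ONLY". */
    static String effectiveMode(Map<String, String> conf) {
        String mode = conf.get(NEW_KEY);
        if (mode != null) {
            return mode; // new config wins whenever it is set
        }
        // Fall back to the deprecated boolean when the new key is not present.
        String deprecated = conf.getOrDefault(DEPRECATED_KEY, "true");
        return Boolean.parseBoolean(deprecated) ? "ENABLED" : "DISABLED";
    }
}
```

This is also why a dedicated test is needed: with both keys present in hdfs-default.xml, only removing the new key exercises the deprecated path.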
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17047016#comment-17047016 ] Siddharth Wagle commented on HDFS-15154: 09 => rebased 08.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.09.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057494#comment-17057494 ] Siddharth Wagle commented on HDFS-15154: Thanks for the review [~hanishakoneru], I will make those changes.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057501#comment-17057501 ] Siddharth Wagle commented on HDFS-15154: Since we are already logging the deprecation, can we just change the warning to this: {noformat} Failed to change storage policy satisfier as storage policies have been disabled. {noformat} rather than the cryptic message?
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.10.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17057597#comment-17057597 ] Siddharth Wagle commented on HDFS-15154: 10 => Updated the patch with the changes suggested by [~hanishakoneru]; changed the exception message to be simpler since we already print a deprecation warning, and updated hdfs-default.xml.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058073#comment-17058073 ] Siddharth Wagle commented on HDFS-15154: 11 => 10 + fixed a checkstyle warning and a UT failing due to the exception text.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.11.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17058203#comment-17058203 ] Siddharth Wagle commented on HDFS-15154: Test failures are unrelated.
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.12.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059443#comment-17059443 ] Siddharth Wagle commented on HDFS-15154: Thanks [~ayushtkn] for the suggestion. Changes in version 12:
- Moved all checks to FSNamesystem before the writeLock is taken, for set, unset, and satisfy
- Removed the config check from FSDirectory so the config is loaded only once
- Removed the checks from the package-private method(s) in FSDirAttrOp since FSNamesystem is the only caller
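The first refactor item above can be sketched as follows. This is not the actual FSNamesystem code; the class, method, and flag names are hypothetical, and it only illustrates the ordering: do the cheap config and permission checks before taking the write lock, so disallowed calls fail fast and never contend for it.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Illustrative sketch of hoisting checks out of FSDirAttrOp into the caller,
// ahead of the namesystem write lock. Names are placeholders, not HDFS APIs.
public class SetPolicySketch {
    private final ReentrantReadWriteLock fsLock = new ReentrantReadWriteLock();
    private final boolean policiesEnabled;
    private final boolean superuserOnly;

    public SetPolicySketch(boolean policiesEnabled, boolean superuserOnly) {
        this.policiesEnabled = policiesEnabled;
        this.superuserOnly = superuserOnly;
    }

    /** Returns true if the (hypothetical) policy assignment was applied. */
    public boolean setStoragePolicy(String src, String policy,
                                    boolean callerIsSuperuser) {
        // Checks run before any lock is held: fail fast on bad config/caller.
        if (!policiesEnabled) {
            throw new IllegalStateException("Storage policies are disabled");
        }
        if (superuserOnly && !callerIsSuperuser) {
            throw new SecurityException(
                "Only superusers may assign storage policies");
        }
        fsLock.writeLock().lock();
        try {
            // ... the actual directory mutation would happen here ...
            return true;
        } finally {
            fsLock.writeLock().unlock();
        }
    }
}
```

The same pattern would apply to the unset and satisfyStoragePolicy paths, which is why removing the duplicate check from the package-private FSDirAttrOp methods is safe once FSNamesystem is the only caller.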
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: (was: HDFS-15154.12.patch)
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.12.patch
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059478#comment-17059478 ] Siddharth Wagle commented on HDFS-15154: Rebased and re-uploaded v12.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059764#comment-17059764 ] Siddharth Wagle commented on HDFS-15154: [~ayushtkn] The problem with the tests is that hdfs-default.xml has both the deprecated config and the new one, so the new one is respected if nothing is modified for those tests, and the default for the new config trumps the deprecated one. That is why I explicitly added a test verifying that the deprecated config works when the new one is not present. Agree with the other refactor suggestion, will update the patch accordingly.
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059764#comment-17059764 ] Siddharth Wagle edited comment on HDFS-15154 at 3/15/20, 6:49 PM: -- [~ayushtkn] The problem with tests is that hdfs-default has both deprecated config and new one, so the new one is respected if nothing is modified for those tests and the default for the new config trumps the deprecated one. That is why I explicitly added a test that tests the deprecated config works when the new one is not present. Note: We cannot use deprecated context here which replaces new config with old value because of type change, otherwise things would have worked without change. Agree with other refactor suggestion, will update the patch accordingly. was (Author: swagle): [~ayushtkn] The problem with tests is that hdfs-default has both deprecated config and new one, so the new one is respected if nothing is modified for those tests and the default for the new config trumps the deprecated one. That is why I explicitly added a test that tests the deprecated config works when the new one is not present. Agree with other refactor suggestion, will update the patch accordingly.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17059775#comment-17059775 ] Siddharth Wagle commented on HDFS-15154: So the old configuration will work only if the new configuration is not present. The situation arises because both old and new are present in hdfs-default.xml: the new one takes its default value and the old one is ignored. In a few tests that explicitly disable the policy, this results in failure unless we delete the new config, as we do in TestStoragePolicyPermissionSettings#testStoragePolicyConfigDeprecation.
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060600#comment-17060600 ] Siddharth Wagle commented on HDFS-15154: We could fall back to not having the deprecation and instead add a new boolean-valued config which indicates whether the operation is superuser-only. Any comments, [~arp], since the change to an enum was a suggestion from you?
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060600#comment-17060600 ] Siddharth Wagle edited comment on HDFS-15154 at 3/17/20, 3:50 AM: -- We could fallback to not have the deprecation and adding a new boolean-valued config which indicates whether superuser only, any comment [~arp], since the change to enum was a suggestion from you? was (Author: swagle): We could fallback to not have the deprecation and adding a new boolean-valued config which indicates whether superuser only, It any comment [~arp], since the change to enum was a suggestion from you?
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.13.patch > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: (was: HDFS-15154.13.patch) > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.13.patch > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060685#comment-17060685 ] Siddharth Wagle commented on HDFS-15154: Thanks [~ayushtkn] and [~hanishakoneru] for reviewing this patch. I was expecting this to be a straightforward change but ended up figuring out how to handle deprecation properly with a config type change, and I think there is no real need to do the type change vs adding a simple flag. In version 13, I went back several versions in my patch and re-did it to add a flag and simply keep all of the existing tests the same. A few points: - the _checkSuperuserPrivilege_ already logs an audit event, so I did not move the check into the try {...} block - the _checkStoragePolicyEnabled_ gets title case vs camel; otherwise I would have to modify existing tests to take camel case in the exception text, which looks ugly anyway - moved the check to FSNameSystem like before and removed the late check from FSDirAttrOp Hopefully this is close to the final version. > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. 
> Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
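The gating discussed in the comment above (an enabled check plus a superuser check before the storage-policy operation proceeds) can be sketched in plain Java. The class and method shapes below are illustrative, not the actual FSNamesystem code from the patch; the real checks throw Hadoop's checked exceptions, whereas this sketch uses unchecked ones for brevity.

```java
// Illustrative sketch of the two checks discussed above; not the actual
// FSNamesystem code from the patch.
public class StoragePolicyGate {

    /** Stand-in for Hadoop's AccessControlException (unchecked here). */
    public static class AccessControlException extends RuntimeException {
        public AccessControlException(String msg) { super(msg); }
    }

    private final boolean policyEnabled;   // dfs.storage.policy.enabled
    private final boolean superuserOnly;   // hypothetical superuser-only flag

    public StoragePolicyGate(boolean policyEnabled, boolean superuserOnly) {
        this.policyEnabled = policyEnabled;
        this.superuserOnly = superuserOnly;
    }

    /** Mirrors checkStoragePolicyEnabled: reject when policies are disabled. */
    public void checkStoragePolicyEnabled(String operationName) {
        if (!policyEnabled) {
            throw new IllegalStateException(operationName
                + " is not allowed: storage policies are disabled"
                + " (dfs.storage.policy.enabled=false).");
        }
    }

    /** Mirrors the superuser gate: reject non-superusers when flag is set. */
    public void checkSuperuserPrivilege(boolean callerIsSuperuser) {
        if (superuserOnly && !callerIsSuperuser) {
            throw new AccessControlException(
                "Access denied: only superusers may assign storage policies.");
        }
    }
}
```

With both flags on, a superuser passes both checks while any other caller is rejected before the policy is applied, which is the ordering the comment describes.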
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060685#comment-17060685 ] Siddharth Wagle edited comment on HDFS-15154 at 3/17/20, 7:34 AM: -- Thanks [~ayushtkn] and [~hanishakoneru] for reviewing this patch. I was expecting this to be a straightforward change but ended up trying to figure out how to handle deprecation properly with a config type change and I think there is no real need to do the type change vs adding a simple flag. in version 13, I went back several versions in my patch and actually re-did it to add a flag and simply keep all of the existing tests the same. Few points: - the _checkSuperuserPrivilege_ already logs audit event so did not move the check to try {...} block - the _checkStoragePolicyEnabled_ gets title case vs camel, otherwise, I would have to modify existing tests to take camel case in the exception text which looks ugly anyway - moved the check to FSNameSystem like before and removed the late check from FSDirAttrOp Hopefully this is close to final version. was (Author: swagle): Thanks [~ayushtkn] and [~hanishakoneru] for reviewing this patch, I was expecting this to be a straightforward change but ended up figure out how to handle deprecation properly with a config type change and I think there is no real need to do the type change vs adding a simple flag. in version 13, I went back several versions in my patch and actually re-did it to add a flag and simply keep all of the existing tests the same. Few points: - the _checkSuperuserPrivilege_ already logs audit event so did not move the check to try {...} block - the _checkStoragePolicyEnabled_ gets title case vs camel, otherwise, I would have to modify existing tests to take camel case in the exception text which looks ugly anyway - moved the check to FSNameSystem like before and removed the late check from FSDirAttrOp Hopefully this is close to final version. 
> Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060685#comment-17060685 ] Siddharth Wagle edited comment on HDFS-15154 at 3/17/20, 7:35 AM: -- Thanks [~ayushtkn] and [~hanishakoneru] for reviewing this patch. I was expecting this to be a straightforward change but ended up trying to figure out how to handle deprecation properly with a config type change and I think there is no real need to do the type change vs adding a simple flag. In version 13, I went back several versions in my patch and actually re-did the patch to add a flag and simply keep all of the existing tests the same. Few points: - the _checkSuperuserPrivilege_ already logs audit event so did not move the check to try {...} block - the _checkStoragePolicyEnabled_ gets title case vs camel, otherwise, I would have to modify existing tests to take camel case in the exception text which looks ugly anyway - moved the check to FSNameSystem like before and removed the late check from FSDirAttrOp Hopefully this is close to final version. was (Author: swagle): Thanks [~ayushtkn] and [~hanishakoneru] for reviewing this patch. I was expecting this to be a straightforward change but ended up trying to figure out how to handle deprecation properly with a config type change and I think there is no real need to do the type change vs adding a simple flag. in version 13, I went back several versions in my patch and actually re-did it to add a flag and simply keep all of the existing tests the same. Few points: - the _checkSuperuserPrivilege_ already logs audit event so did not move the check to try {...} block - the _checkStoragePolicyEnabled_ gets title case vs camel, otherwise, I would have to modify existing tests to take camel case in the exception text which looks ugly anyway - moved the check to FSNameSystem like before and removed the late check from FSDirAttrOp Hopefully this is close to final version. 
> Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17060685#comment-17060685 ] Siddharth Wagle edited comment on HDFS-15154 at 3/17/20, 7:36 AM: -- Thanks [~ayushtkn] and [~hanishakoneru] for reviewing this patch. I was expecting this to be a straightforward change but ended up trying to figure out how to handle deprecation properly with a config type change and I think there is no real need to do the type change vs adding a simple flag. In version 13, I went back several versions in my patch and actually re-did the patch to add a flag and simply keep all of the existing tests the same. Few points: - the _checkSuperuserPrivilege_ already logs audit event so did not move the check to try { } block - the _checkStoragePolicyEnabled_ gets title case vs camel, otherwise, I would have to modify existing tests to take camel case in the exception text which looks ugly anyway - moved the check to FSNameSystem like before and removed the late check from FSDirAttrOp Hopefully this is close to final version. was (Author: swagle): Thanks [~ayushtkn] and [~hanishakoneru] for reviewing this patch. I was expecting this to be a straightforward change but ended up trying to figure out how to handle deprecation properly with a config type change and I think there is no real need to do the type change vs adding a simple flag. In version 13, I went back several versions in my patch and actually re-did the patch to add a flag and simply keep all of the existing tests the same. Few points: - the _checkSuperuserPrivilege_ already logs audit event so did not move the check to try {...} block - the _checkStoragePolicyEnabled_ gets title case vs camel, otherwise, I would have to modify existing tests to take camel case in the exception text which looks ugly anyway - moved the check to FSNameSystem like before and removed the late check from FSDirAttrOp Hopefully this is close to final version. 
> Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.14.patch > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch, HDFS-15154.14.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17064058#comment-17064058 ] Siddharth Wagle commented on HDFS-15154: Thanks [~ayushtkn] for the review. Patch 14 incorporates all of the above comments from [~ayushtkn]. One additional change: used CaseUtils.toCamelCase(); unfortunately, there is no reverse of this, which would have been ideal. More readable, I think, but I can change this back to defining string literals if reviewers feel otherwise. > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch, HDFS-15154.14.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
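CaseUtils.toCamelCase mentioned above comes from Apache Commons Text, and as the comment notes it has no built-in reverse. A stdlib-only approximation of the behavior relied on here, for illustration only (the CamelCase class and its method are hypothetical names, not the patch's code), could look like:

```java
// Stdlib-only approximation of Commons Text's CaseUtils.toCamelCase, shown
// only to illustrate the transformation discussed above; the patch uses the
// real org.apache.commons.text.CaseUtils.
public final class CamelCase {
    private CamelCase() {}

    /** Converts e.g. "set storage policy" (delimiter ' ') to "SetStoragePolicy". */
    public static String toCamelCase(String input, char delimiter) {
        StringBuilder out = new StringBuilder();
        String quoted = java.util.regex.Pattern.quote(String.valueOf(delimiter));
        for (String word : input.split(quoted)) {
            if (word.isEmpty()) {
                continue;  // collapse repeated delimiters
            }
            // Capitalize the first letter, lowercase the rest of the word.
            out.append(Character.toUpperCase(word.charAt(0)))
               .append(word.substring(1).toLowerCase());
        }
        return out.toString();
    }
}
```

There is no general inverse: once the delimiters are gone, "SetStoragePolicy" cannot be mapped back to the exact original string, which is why falling back to string literals for exception text is a reasonable alternative.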
[jira] [Updated] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-15154: --- Attachment: HDFS-15154.15.patch > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch, HDFS-15154.14.patch, > HDFS-15154.15.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17066391#comment-17066391 ] Siddharth Wagle commented on HDFS-15154: Thanks [~arp] for your feedback. [~ayushtkn] I have updated version 15 with the minor change to replace most of the CaseUtils calls with string literal for operation name. > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch, HDFS-15154.14.patch, > HDFS-15154.15.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-15154) Allow only hdfs superusers the ability to assign HDFS storage policies
[ https://issues.apache.org/jira/browse/HDFS-15154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17066889#comment-17066889 ] Siddharth Wagle commented on HDFS-15154: Thanks [~ayushtkn] for actually going through every iteration of this, much appreciated. Can you commit this for me? Thanks. > Allow only hdfs superusers the ability to assign HDFS storage policies > -- > > Key: HDFS-15154 > URL: https://issues.apache.org/jira/browse/HDFS-15154 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs >Affects Versions: 3.0.0 >Reporter: Bob Cauthen >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-15154.01.patch, HDFS-15154.02.patch, > HDFS-15154.03.patch, HDFS-15154.04.patch, HDFS-15154.05.patch, > HDFS-15154.06.patch, HDFS-15154.07.patch, HDFS-15154.08.patch, > HDFS-15154.09.patch, HDFS-15154.10.patch, HDFS-15154.11.patch, > HDFS-15154.12.patch, HDFS-15154.13.patch, HDFS-15154.14.patch, > HDFS-15154.15.patch > > > Please provide a way to limit only HDFS superusers the ability to assign HDFS > Storage Policies to HDFS directories. > Currently, and based on Jira HDFS-7093, all storage policies can be disabled > cluster wide by setting the following: > dfs.storage.policy.enabled to false > But we need a way to allow only HDFS superusers the ability to assign an HDFS > Storage Policy to an HDFS directory. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1953) Remove pipeline persistent in SCM
[ https://issues.apache.org/jira/browse/HDDS-1953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905406#comment-16905406 ] Siddharth Wagle commented on HDDS-1953: --- How can we determine if a pipeline is lost? I did bring this point up in the design doc for multi-raft. Could PipelineManager instead listen in on heartbeats and react? > Remove pipeline persistent in SCM > - > > Key: HDDS-1953 > URL: https://issues.apache.org/jira/browse/HDDS-1953 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Sammi Chen >Assignee: Sammi Chen >Priority: Major > > Currently, SCM will persist the pipeline in the metastore with datanode > information locally. After an SCM restart, it will reload all the pipelines from > the metastore. If there is any datanode information change during the whole > SCM lifecycle, the persisted pipeline is not updated. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDDS-1953) Remove pipeline persistent in SCM
[ https://issues.apache.org/jira/browse/HDDS-1953?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905406#comment-16905406 ] Siddharth Wagle edited comment on HDDS-1953 at 8/12/19 5:26 PM: How can we determine if a pipeline is lost? I did bring this point up in the design doc for multi-raft: HDDS-1564. Could PipelineManager instead listen in on heartbeats and react? was (Author: swagle): How can we determine if pipeline is lost? I did bring this point up in the design doc for multi-raft. PipelineManager can instead listen in on heartbeats and react? > Remove pipeline persistent in SCM > - > > Key: HDDS-1953 > URL: https://issues.apache.org/jira/browse/HDDS-1953 > Project: Hadoop Distributed Data Store > Issue Type: Improvement >Reporter: Sammi Chen >Assignee: Sammi Chen >Priority: Major > > Currently, SCM will persist the pipeline in the metastore with datanode > information locally. After an SCM restart, it will reload all the pipelines from > the metastore. If there is any datanode information change during the whole > SCM lifecycle, the persisted pipeline is not updated. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-2470: -- Attachment: HDFS-2470.06.patch > NN should automatically set permissions on dfs.namenode.*.dir > - > > Key: HDFS-2470 > URL: https://issues.apache.org/jira/browse/HDFS-2470 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Aaron T. Myers >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, > HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, HDFS-2470.06.patch > > > Much as the DN currently sets the correct permissions for the > dfs.datanode.data.dir, the NN should do the same for the > dfs.namenode.(name|edit).dir. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16905641#comment-16905641 ] Siddharth Wagle commented on HDFS-2470: --- Addressed the comments from [~eyang] in 06, except for setting permissions on the root directory. I need some insight on what the right fix should be. Setting the permissions on just */tmp/namenode/current* and not on */tmp/namenode* does not make sense to me, but consequently, if someone sets _dfs.namenode.name.dir_ to "/tmp" we would end up doing the wrong thing. > NN should automatically set permissions on dfs.namenode.*.dir > - > > Key: HDFS-2470 > URL: https://issues.apache.org/jira/browse/HDFS-2470 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Aaron T. Myers >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, > HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, HDFS-2470.06.patch > > > Much as the DN currently sets the correct permissions for the > dfs.datanode.data.dir, the NN should do the same for the > dfs.namenode.(name|edit).dir. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906895#comment-16906895 ] Siddharth Wagle commented on HDFS-2470: --- Hi [~eyang] thanks for catching the oversight, fixing #1 from your comment. Not sure about #2, why can we not fall back to a fixed default of 700? > NN should automatically set permissions on dfs.namenode.*.dir > - > > Key: HDFS-2470 > URL: https://issues.apache.org/jira/browse/HDFS-2470 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Aaron T. Myers >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, > HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, HDFS-2470.06.patch > > > Much as the DN currently sets the correct permissions for the > dfs.datanode.data.dir, the NN should do the same for the > dfs.namenode.(name|edit).dir. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
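The fixed default of 700 discussed here corresponds to POSIX rwx------. A sketch, using only the JDK, of how a name directory's permissions could be applied explicitly is below; StorageDirPerms and createWithPerms are illustrative names, not the actual Hadoop Storage/DiskChecker code, and this assumes a POSIX file system.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

// Sketch: create a storage directory and force the requested permissions,
// analogous in spirit to setting 700 on dfs.namenode.(name|edit).dir.
public final class StorageDirPerms {
    private StorageDirPerms() {}

    /** Creates dir if needed and applies posixPerms, e.g. "rwx------" (700). */
    public static Path createWithPerms(Path dir, String posixPerms) {
        try {
            Set<PosixFilePermission> perms =
                PosixFilePermissions.fromString(posixPerms);
            if (!Files.exists(dir)) {
                Files.createDirectories(dir);
            }
            // Set permissions explicitly rather than relying on the umask.
            Files.setPosixFilePermissions(dir, perms);
            return dir;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Note the sketch only touches the directory it is given; deciding whether to walk up and fix parents (the root-directory question raised in the previous comment) is a separate policy choice.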
[jira] [Comment Edited] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16906895#comment-16906895 ] Siddharth Wagle edited comment on HDFS-2470 at 8/14/19 6:07 AM: Hi [~eyang] thanks for catching the oversight, fixing #1 from your comment. Not sure about #2, why can we not fall back to a fixed default of 700? I left the null for all the constructors used by the unit tests which do not send any permissions down. was (Author: swagle): Hi [~eyang] thanks for catching the oversight, fixing #1 from your comment. Not sure about #2, why can we not fall back to a fixed default of 700? > NN should automatically set permissions on dfs.namenode.*.dir > - > > Key: HDFS-2470 > URL: https://issues.apache.org/jira/browse/HDFS-2470 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Aaron T. Myers >Assignee: Siddharth Wagle >Priority: Major > Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, > HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, HDFS-2470.06.patch > > > Much as the DN currently sets the correct permissions for the > dfs.datanode.data.dir, the NN should do the same for the > dfs.namenode.(name|edit).dir. -- This message was sent by Atlassian JIRA (v7.6.14#76016) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-2470: -- Attachment: HDFS-2470.07.patch
[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16908817#comment-16908817 ] Siddharth Wagle commented on HDFS-2470: --- - Agree with point #1; will ignore root directory permissions. [~arp], are you OK with that, since you +1'd earlier? - Regarding point #2: I did not want the generic StorageDirectory class to define a default, so I defined the defaults for the NN and JN separately; and since the unit tests don't care about setting permissions, I went with a null check that skips permission setting instead. But do you mean we should change all unit-test call sites to pass a default 700 permission? That would be OK, I guess.
[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16909144#comment-16909144 ] Siddharth Wagle commented on HDFS-2470: --- On second thought, HBase still relies on short-circuit reads AFAIK, so setting default permissions would be dangerous; at the least we would need to default to "750". The current patch with nulls, however, has no side effects for DN storage.
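The 700-vs-750 distinction above comes down to the group bits: "750" keeps a directory closed to others while adding group read+execute, which a same-group local reader (e.g. an HBase RegionServer using short-circuit reads, as the comment suggests) needs in order to traverse DN storage directories. A small sketch of the octal semantics, with an invented helper name (`octalToSymbolic`), purely for illustration:

```java
// Sketch: convert a 3-digit octal mode string to POSIX symbolic form and
// show that 750 is exactly 700 plus group read+execute.
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

class OctalPermsSketch {

    /** Converts an octal string like "750" to symbolic form like "rwxr-x---". */
    static String octalToSymbolic(String octal) {
        StringBuilder sb = new StringBuilder();
        for (char c : octal.toCharArray()) {
            int bits = c - '0';
            sb.append((bits & 4) != 0 ? 'r' : '-');
            sb.append((bits & 2) != 0 ? 'w' : '-');
            sb.append((bits & 1) != 0 ? 'x' : '-');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Set<PosixFilePermission> p700 =
                PosixFilePermissions.fromString(octalToSymbolic("700"));
        Set<PosixFilePermission> p750 =
                PosixFilePermissions.fromString(octalToSymbolic("750"));
        System.out.println(octalToSymbolic("700")); // rwx------
        System.out.println(octalToSymbolic("750")); // rwxr-x---
        // 750 is a strict superset of 700: it only adds group permissions.
        System.out.println(p750.containsAll(p700)); // true
    }
}
```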
[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-2470: -- Attachment: HDFS-2470.08.patch
[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911687#comment-16911687 ] Siddharth Wagle commented on HDFS-2470: --- In 08, removed the root-dir permission setting. Regarding setting the default for StorageDirectory: I am not sure we should do that, since it might break some dependency elsewhere, as stated earlier with short-circuit reads.
[jira] [Assigned] (HDDS-1873) Recon should store last successful run timestamp for each task
[ https://issues.apache.org/jira/browse/HDDS-1873?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-1873: - Assignee: Siddharth Wagle > Recon should store last successful run timestamp for each task > -- > > Key: HDDS-1873 > URL: https://issues.apache.org/jira/browse/HDDS-1873 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Recon >Affects Versions: 0.4.1 >Reporter: Vivek Ratnavel Subramanian >Assignee: Siddharth Wagle >Priority: Major > > Recon should store the last Ozone Manager snapshot received timestamp, along with > the timestamp of the last successful run for each task. > This is important to give users a sense of how fresh the data they are looking at > is. And we need this per task, because some tasks might fail to run, or take much > longer than others, and this needs to be reflected in the UI for a better and > consistent user experience.
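The per-task freshness tracking described in HDDS-1873 above can be sketched as a small in-memory model: one timestamp for the last OM snapshot received, plus one per task for its last successful run. All names here (`TaskFreshnessSketch`, `lagMillis`) are hypothetical stand-ins, not the actual Recon classes:

```java
// Sketch: per-task last-successful-run timestamps alongside the last OM
// snapshot timestamp, so a UI can report how stale each task's view is.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class TaskFreshnessSketch {
    private final Map<String, Long> lastSuccessfulRun = new ConcurrentHashMap<>();
    private volatile long lastOmSnapshotMillis;

    void recordOmSnapshot(long millis) { lastOmSnapshotMillis = millis; }

    void recordTaskSuccess(String task, long millis) {
        lastSuccessfulRun.put(task, millis);
    }

    /** Lag of a task behind the latest OM snapshot; 0 if it is up to date. */
    long lagMillis(String task) {
        long last = lastSuccessfulRun.getOrDefault(task, 0L);
        return Math.max(0L, lastOmSnapshotMillis - last);
    }

    public static void main(String[] args) {
        TaskFreshnessSketch t = new TaskFreshnessSketch();
        t.recordOmSnapshot(10_000L);
        t.recordTaskSuccess("ContainerKeyMapper", 9_000L); // ran before snapshot
        t.recordTaskSuccess("FileSizeCount", 10_000L);     // up to date
        System.out.println(t.lagMillis("ContainerKeyMapper")); // 1000
        System.out.println(t.lagMillis("FileSizeCount"));      // 0
    }
}
```

A task that failed its last run simply keeps its old timestamp, so its lag keeps growing, which is exactly the signal the issue wants surfaced in the UI.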
[jira] [Assigned] (HDDS-1968) Add an endpoint in SCM to publish unhealthy/missing containers.
[ https://issues.apache.org/jira/browse/HDDS-1968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-1968: - Assignee: Aravindan Vijayan Will be moving this to v2 since it involves: - Ability to take rocksDB snapshot of SCM - API to service snapshots to Recon over rpc/http - Persist container state in the SCM rocksDB > Add an endpoint in SCM to publish unhealthy/missing containers. > --- > > Key: HDDS-1968 > URL: https://issues.apache.org/jira/browse/HDDS-1968 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Reporter: Aravindan Vijayan >Assignee: Aravindan Vijayan >Priority: Major > Fix For: 0.5.0 > > -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1084) Ozone Recon Service v1
[ https://issues.apache.org/jira/browse/HDDS-1084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911783#comment-16911783 ] Siddharth Wagle commented on HDDS-1084: --- Recon v1 has the Container -> Key browser, cluster-growth, and small-files queries, in addition to the UI framework to serve future queries. Will be moving the out-of-scope tasks to v2. (FSCK needs unhealthy-container detection, which will be part of the next release.) cc: [~anu] > Ozone Recon Service v1 > -- > > Key: HDDS-1084 > URL: https://issues.apache.org/jira/browse/HDDS-1084 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Ozone Recon >Affects Versions: 0.4.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > Attachments: Ozone_Recon_Design_V1_Draft.pdf > > > Recon Server at a high level will maintain a global view of Ozone that is not > available from SCM or OM. Things like how many volumes exist; and how many > buckets exist per volume; which volume has maximum buckets; which are buckets > that have not been accessed for a year, which are the corrupt blocks, which > are blocks on data nodes which are not used; and answer similar queries.
[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911932#comment-16911932 ] Siddharth Wagle commented on HDFS-2470: --- Unit test failure is related: we checked the root instead of the current dir. Making the change.
[jira] [Created] (HDDS-1996) Ozone Recon Service v1.1
Siddharth Wagle created HDDS-1996: - Summary: Ozone Recon Service v1.1 Key: HDDS-1996 URL: https://issues.apache.org/jira/browse/HDDS-1996 Project: Hadoop Distributed Data Store Issue Type: New Feature Components: Ozone Recon Affects Versions: 0.5.0 Reporter: Siddharth Wagle Assignee: Siddharth Wagle Placeholder for all the tasks covered in the design doc for HDDS-1084 that did not make it into the 0.5.0 release, targeted at the next release of Ozone after 0.5.0.
[jira] [Updated] (HDDS-1996) Ozone Recon Service v1.1
[ https://issues.apache.org/jira/browse/HDDS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-1996: -- Description: Placeholder for all the tasks covered in the design doc for HDDS-1084 but did not make it into 0.5.0 release and targeted towards the next release of Ozone after 0.5.0. An updated design document will be uploaded to this Jira. was:Placeholder for all the tasks covered in the design doc for HDDS-1084 but did not make it into 0.5.0 release and targeted towards the next release of Ozone after 0.5.0. > Ozone Recon Service v1.1 > > > Key: HDDS-1996 > URL: https://issues.apache.org/jira/browse/HDDS-1996 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Ozone Recon >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > > Placeholder for all the tasks covered in the design doc for HDDS-1084 but did > not make it into 0.5.0 release and targeted towards the next release of Ozone > after 0.5.0. > An updated design document will be uploaded to this Jira. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-1996) Ozone Recon Service v1.1
[ https://issues.apache.org/jira/browse/HDDS-1996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-1996: - Assignee: Aravindan Vijayan (was: Siddharth Wagle) > Ozone Recon Service v1.1 > > > Key: HDDS-1996 > URL: https://issues.apache.org/jira/browse/HDDS-1996 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Ozone Recon >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Aravindan Vijayan >Priority: Major > > Placeholder for all the tasks covered in the design doc for HDDS-1084 but did > not make it into 0.5.0 release and targeted towards the next release of Ozone > after 0.5.0. > An updated design document will be uploaded to this Jira. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16911932#comment-16911932 ] Siddharth Wagle edited comment on HDFS-2470 at 8/21/19 4:41 AM: Unit test failure is related since we checked root instead of the current dir. Re-uploaded version 08 with the unit test fix. was (Author: swagle): Unit test failure is related since we checked root instead of the current dir, making the change.
[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-2470: -- Attachment: HDFS-2470.08.patch
[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-2470: -- Attachment: (was: HDFS-2470.08.patch)
[jira] [Comment Edited] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912551#comment-16912551 ] Siddharth Wagle edited comment on HDFS-2470 at 8/21/19 5:51 PM: 09 => unit test and checkstyle fixed. was (Author: swagle): 09 => unit test fixed.
[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-2470: -- Attachment: HDFS-2470.09.patch
[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16912551#comment-16912551 ] Siddharth Wagle commented on HDFS-2470: --- 09 => unit test fixed.
[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-2470: -- Attachment: HDFS-2470.09.patch
[jira] [Updated] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDFS-2470: -- Attachment: (was: HDFS-2470.09.patch)
[jira] [Assigned] (HDDS-1574) ensure same datanodes are not a part of multiple pipelines
[ https://issues.apache.org/jira/browse/HDDS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-1574: - Assignee: (was: Siddharth Wagle) > ensure same datanodes are not a part of multiple pipelines > -- > > Key: HDDS-1574 > URL: https://issues.apache.org/jira/browse/HDDS-1574 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Siddharth Wagle >Priority: Major > > Details in design doc. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-1572) Implement a Pipeline scrubber to maintain healthy number of pipelines in a cluster
[ https://issues.apache.org/jira/browse/HDDS-1572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-1572: - Assignee: (was: Siddharth Wagle) > Implement a Pipeline scrubber to maintain healthy number of pipelines in a > cluster > -- > > Key: HDDS-1572 > URL: https://issues.apache.org/jira/browse/HDDS-1572 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Siddharth Wagle >Priority: Major > > The design document talks about initial requirements for the pipeline > scrubber. > - Maintain a datastructure for datanodes violating the pipeline membership > soft upper bound. > - Scan the pipelines that the nodes are a part of to select candidates for > teardown. > - Scan pipelines that do not have open containers currently in use and > datanodes are in violation. > - Schedule tear down operation if a candidate pipeline is found. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
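The four scrubber requirements listed in HDDS-1572 above can be sketched as a candidate-selection pass: count pipeline membership per datanode, find violators of the soft upper bound, then pick among their pipelines one with no open containers for teardown. All types below are hypothetical stand-ins, not the actual SCM classes, and the soft limit value is assumed:

```java
// Sketch of the pipeline scrubber's candidate selection, following the
// steps in the issue description. Pipelines and datanodes are plain strings.
import java.util.*;

class PipelineScrubberSketch {

    static final int SOFT_LIMIT = 2; // assumed max pipelines per datanode

    /** pipelines: pipelineId -> member datanodes. Returns a teardown candidate. */
    static Optional<String> pickTeardownCandidate(
            Map<String, List<String>> pipelines,
            Set<String> pipelinesWithOpenContainers) {
        // 1. Count pipeline membership per datanode.
        Map<String, Integer> membership = new HashMap<>();
        pipelines.values().forEach(dns ->
                dns.forEach(dn -> membership.merge(dn, 1, Integer::sum)));
        // 2. Datanodes violating the soft upper bound.
        Set<String> violators = new HashSet<>();
        membership.forEach((dn, n) -> { if (n > SOFT_LIMIT) violators.add(dn); });
        // 3. Among pipelines containing a violator, prefer ones with no open
        //    containers; 4. the first match would be scheduled for teardown.
        return pipelines.entrySet().stream()
                .filter(e -> e.getValue().stream().anyMatch(violators::contains))
                .filter(e -> !pipelinesWithOpenContainers.contains(e.getKey()))
                .map(Map.Entry::getKey)
                .findFirst();
    }

    public static void main(String[] args) {
        Map<String, List<String>> pipelines = new LinkedHashMap<>();
        pipelines.put("p1", List.of("dn1", "dn2", "dn3"));
        pipelines.put("p2", List.of("dn1", "dn4", "dn5"));
        pipelines.put("p3", List.of("dn1", "dn6", "dn7")); // dn1 in 3 pipelines
        System.out.println(pickTeardownCandidate(pipelines, Set.of("p1", "p2")));
    }
}
```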
[jira] [Updated] (HDDS-1574) Ensure same datanodes are not a part of multiple pipelines
[ https://issues.apache.org/jira/browse/HDDS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-1574: -- Summary: Ensure same datanodes are not a part of multiple pipelines (was: ensure same datanodes are not a part of multiple pipelines)
[jira] [Assigned] (HDDS-1570) Refactor heartbeat reports to report all the pipelines that are open
[ https://issues.apache.org/jira/browse/HDDS-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-1570: - Assignee: (was: Siddharth Wagle) > Refactor heartbeat reports to report all the pipelines that are open > > > Key: HDDS-1570 > URL: https://issues.apache.org/jira/browse/HDDS-1570 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: Ozone Datanode >Reporter: Siddharth Wagle >Priority: Major > > Presently the pipeline report only reports a single pipeline id. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Assigned] (HDDS-1574) Ensure same datanodes are not a part of multiple pipelines
[ https://issues.apache.org/jira/browse/HDDS-1574?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-1574: - Assignee: Siddharth Wagle
[jira] [Assigned] (HDDS-1570) Refactor heartbeat reports to report all the pipelines that are open
[ https://issues.apache.org/jira/browse/HDDS-1570?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-1570: - Assignee: Siddharth Wagle
[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complet
[ https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16916155#comment-16916155 ] Siddharth Wagle commented on HDDS-1868: --- This needs some change in Ratis contract as well, right? I don't see a clear way of asking a RaftPeer whether there is a leader in XceiverServerRatis. > Ozone pipelines should be marked as ready only after the leader election is > complet > --- > > Key: HDDS-1868 > URL: https://issues.apache.org/jira/browse/HDDS-1868 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode, SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Priority: Major > Fix For: 0.5.0 > > > Ozone pipeline on restart start in allocated state, they are moved into open > state after all the pipeline have reported to it. However this potentially > can lead into an issue where the pipeline is still not ready to accept any > incoming IO operations. > The pipelines should be marked as ready only after the leader election is > complete and leader is ready to accept incoming IO. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
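The readiness gate HDDS-1868 asks for can be sketched as a small state machine: a restarted pipeline stays ALLOCATED until both the pipeline reports arrive and some leader-elected signal is observed, and only then moves to OPEN. The `BooleanSupplier` below stands in for whatever Ratis-side check ends up being exposed; as the comment above notes, that API is exactly the unclear part, so nothing here is the real `XceiverServerRatis` interface:

```java
// Sketch: gate the ALLOCATED -> OPEN transition on a leader-election signal
// in addition to the existing "all datanodes reported" condition.
import java.util.function.BooleanSupplier;

class PipelineReadinessSketch {
    enum State { ALLOCATED, OPEN }

    private State state = State.ALLOCATED;

    /** Called on each pipeline report; opens the pipeline only once a leader exists. */
    State onReport(boolean allDatanodesReported, BooleanSupplier leaderElected) {
        if (state == State.ALLOCATED
                && allDatanodesReported
                && leaderElected.getAsBoolean()) {
            state = State.OPEN;
        }
        return state;
    }

    public static void main(String[] args) {
        PipelineReadinessSketch p = new PipelineReadinessSketch();
        // All reports in, but no leader yet: pipeline must not accept IO.
        System.out.println(p.onReport(true, () -> false)); // ALLOCATED
        // Leader elected: now safe to open.
        System.out.println(p.onReport(true, () -> true));  // OPEN
    }
}
```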
[jira] [Assigned] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complet
[ https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-1868: - Assignee: Siddharth Wagle
[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete
[ https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-1868: -- Summary: Ozone pipelines should be marked as ready only after the leader election is complete (was: Ozone pipelines should be marked as ready only after the leader election is complet) > Ozone pipelines should be marked as ready only after the leader election is > complete > > > Key: HDDS-1868 > URL: https://issues.apache.org/jira/browse/HDDS-1868 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode, SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > > Ozone pipeline on restart start in allocated state, they are moved into open > state after all the pipeline have reported to it. However this potentially > can lead into an issue where the pipeline is still not ready to accept any > incoming IO operations. > The pipelines should be marked as ready only after the leader election is > complete and leader is ready to accept incoming IO. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-2470) NN should automatically set permissions on dfs.namenode.*.dir
[ https://issues.apache.org/jira/browse/HDFS-2470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16917096#comment-16917096 ] Siddharth Wagle commented on HDFS-2470: --- Thanks for following up [~eyang], appreciate it! > NN should automatically set permissions on dfs.namenode.*.dir > - > > Key: HDFS-2470 > URL: https://issues.apache.org/jira/browse/HDFS-2470 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.0.0-alpha >Reporter: Aaron T. Myers >Assignee: Siddharth Wagle >Priority: Major > Fix For: 3.3.0, 3.2.1 > > Attachments: HDFS-2470.01.patch, HDFS-2470.02.patch, > HDFS-2470.03.patch, HDFS-2470.04.patch, HDFS-2470.05.patch, > HDFS-2470.06.patch, HDFS-2470.07.patch, HDFS-2470.08.patch, HDFS-2470.09.patch > > > Much as the DN currently sets the correct permissions for the > dfs.datanode.data.dir, the NN should do the same for the > dfs.namenode.(name|edit).dir. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
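As an illustration of the behavior the issue asks for (not the actual patch, which goes through Hadoop's own storage-directory and DiskChecker machinery), a hypothetical sketch that creates a storage directory and forces it to the configured permissions:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

/** Hypothetical sketch: ensure a storage directory exists with the
 *  configured permissions (e.g. "rwx------"), analogous to what the DN
 *  already does for dfs.datanode.data.dir. Not the NameNode implementation. */
class StorageDirInitializer {
    static void ensureDir(Path dir, String perms) throws IOException {
        Set<PosixFilePermission> wanted = PosixFilePermissions.fromString(perms);
        Files.createDirectories(dir);
        // Fix up permissions if they drifted from the configured value.
        if (!Files.getPosixFilePermissions(dir).equals(wanted)) {
            Files.setPosixFilePermissions(dir, wanted);
        }
    }
}
```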
[jira] [Assigned] (HDDS-2050) Error while compiling ozone-recon-web
[ https://issues.apache.org/jira/browse/HDDS-2050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle reassigned HDDS-2050: - Assignee: Vivek Ratnavel Subramanian This is probably an intermittent failure, since I did not hit it when building as recently as yesterday. > Error while compiling ozone-recon-web > - > > Key: HDDS-2050 > URL: https://issues.apache.org/jira/browse/HDDS-2050 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Recon >Reporter: Nanda kumar >Assignee: Vivek Ratnavel Subramanian >Priority: Major > > The following error is seen while compiling {{ozone-recon-web}} > {noformat} > [INFO] Running 'yarn install' in > /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web > [INFO] yarn install v1.9.2 > [INFO] [1/4] Resolving packages... > [INFO] [2/4] Fetching packages... > [ERROR] (node:31190) [DEP0005] DeprecationWarning: Buffer() is deprecated due > to security and usability issues. Please use the Buffer.alloc(), > Buffer.allocUnsafe(), or Buffer.from() methods instead. > [INFO] [3/4] Linking dependencies... > [ERROR] warning " > less-loader@5.0.0" has unmet peer dependency > "webpack@^2.0.0 || ^3.0.0 || ^4.0.0". > [INFO] [4/4] Building fresh packages... > [ERROR] warning Error running install script for optional dependency: > "/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents: > Command failed. 
> [ERROR] Exit code: 1 > [ERROR] Command: node install > [ERROR] Arguments: > [ERROR] Directory: > /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents > [ERROR] Output: > [ERROR] node-pre-gyp info it worked if it ends with ok > [INFO] info This module is OPTIONAL, you can safely ignore this error > [ERROR] node-pre-gyp info using node-pre-gyp@0.12.0 > [ERROR] node-pre-gyp info using node@12.1.0 | darwin | x64 > [ERROR] node-pre-gyp WARN Using request for node-pre-gyp https download > [ERROR] node-pre-gyp info check checked for > \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/lib/binding/Release/node-v72-darwin-x64/fse.node\" > (not found) > [ERROR] node-pre-gyp http GET > https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz > [ERROR] node-pre-gyp http 404 > https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz > [ERROR] node-pre-gyp WARN Tried to download(404): > https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz > [ERROR] node-pre-gyp WARN Pre-built binaries not found for fsevents@1.2.8 and > node@12.1.0 (node-v72 ABI, unknown) (falling back to source compile with > node-gyp) > [ERROR] node-pre-gyp http 404 status code downloading tarball > https://fsevents-binaries.s3-us-west-2.amazonaws.com/v1.2.8/fse-v1.2.8-node-v72-darwin-x64.tar.gz > [ERROR] node-pre-gyp ERR! build error > [ERROR] node-pre-gyp ERR! stack Error: Failed to execute 'node-gyp clean' > (Error: spawn node-gyp ENOENT) > [ERROR] node-pre-gyp ERR! stack at ChildProcess. > (/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/lib/util/compile.js:77:29) > [ERROR] node-pre-gyp ERR! 
stack at ChildProcess.emit (events.js:196:13) > [ERROR] node-pre-gyp ERR! stack at Process.ChildProcess._handle.onexit > (internal/child_process.js:254:12) > [ERROR] node-pre-gyp ERR! stack at onErrorNT > (internal/child_process.js:431:16) > [ERROR] node-pre-gyp ERR! stack at processTicksAndRejections > (internal/process/task_queues.js:84:17) > [ERROR] node-pre-gyp ERR! System Darwin 18.5.0 > [ERROR] node-pre-gyp ERR! command > \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/target/node/node\" > > \"/Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents/node_modules/node-pre-gyp/bin/node-pre-gyp\" > \"install\" \"--fallback-to-build\" > [ERROR] node-pre-gyp ERR! cwd > /Users/nvadivelu/codebase/apache/hadoop/hadoop-ozone/ozone-recon/src/main/resources/webapps/recon/ozone-recon-web/node_modules/fsevents > [ERROR] node-pre-gyp ERR! node -v v12.1.0 > [ERROR] node-pre-gyp ERR! node-pre-gyp -v v0.12.0 > [ERROR] node-pre-gyp ERR! not ok > [ERROR] Failed to execute 'node-gyp clean' (Error: spawn node-gyp ENOENT)" > [INFO] Done in 102.54s. > {noformat} -- This message was sent by Atlassian Jira (v8.3.2#803003) -
[jira] [Updated] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete
[ https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-1868: -- Attachment: HDDS-1868.01.patch > Ozone pipelines should be marked as ready only after the leader election is > complete > > > Key: HDDS-1868 > URL: https://issues.apache.org/jira/browse/HDDS-1868 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode, SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1868.01.patch > > > Ozone pipeline on restart start in allocated state, they are moved into open > state after all the pipeline have reported to it. However this potentially > can lead into an issue where the pipeline is still not ready to accept any > incoming IO operations. > The pipelines should be marked as ready only after the leader election is > complete and leader is ready to accept incoming IO. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete
[ https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16919010#comment-16919010 ] Siddharth Wagle commented on HDDS-1868: --- [~msingh] Attaching a preliminary patch to get feedback; will open a pull request with unit tests thereafter. > Ozone pipelines should be marked as ready only after the leader election is > complete > > > Key: HDDS-1868 > URL: https://issues.apache.org/jira/browse/HDDS-1868 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode, SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1868.01.patch > > > Ozone pipeline on restart start in allocated state, they are moved into open > state after all the pipeline have reported to it. However this potentially > can lead into an issue where the pipeline is still not ready to accept any > incoming IO operations. > The pipelines should be marked as ready only after the leader election is > complete and leader is ready to accept incoming IO. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDDS-1868) Ozone pipelines should be marked as ready only after the leader election is complete
[ https://issues.apache.org/jira/browse/HDDS-1868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDDS-1868 started by Siddharth Wagle. - > Ozone pipelines should be marked as ready only after the leader election is > complete > > > Key: HDDS-1868 > URL: https://issues.apache.org/jira/browse/HDDS-1868 > Project: Hadoop Distributed Data Store > Issue Type: Bug > Components: Ozone Datanode, SCM >Affects Versions: 0.4.0 >Reporter: Mukul Kumar Singh >Assignee: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1868.01.patch > > > Ozone pipeline on restart start in allocated state, they are moved into open > state after all the pipeline have reported to it. However this potentially > can lead into an issue where the pipeline is still not ready to accept any > incoming IO operations. > The pipelines should be marked as ready only after the leader election is > complete and leader is ready to accept incoming IO. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Work started] (HDDS-1564) Ozone multi-raft support
[ https://issues.apache.org/jira/browse/HDDS-1564?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Work on HDDS-1564 started by Siddharth Wagle. - > Ozone multi-raft support > > > Key: HDDS-1564 > URL: https://issues.apache.org/jira/browse/HDDS-1564 > Project: Hadoop Distributed Data Store > Issue Type: New Feature > Components: Ozone Datanode, SCM >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Attachments: Ozone Multi-Raft Support.pdf > > > Apache Ratis supports multi-raft by allowing the same node to be a part of > multiple raft groups. The proposal is to allow datanodes to be a part of > multiple raft groups. The attached design doc explains the reasons for doing > this as well a few initial design decisions. > Some of the work in this feature also related to HDDS-700 which implements > rack-aware container placement for closed containers. -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode
[ https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921587#comment-16921587 ] Siddharth Wagle edited comment on HDDS-1569 at 9/3/19 5:27 PM: --- [~timmylicheng] I believe by internal data structures I meant the _PipelineStateMap_, but I will take a look at the doc/code and get back if any other data structures need an update. The only way new pipelines are created should be through the Background pipeline creator job. We should not create any pipelines on client requests; in fact, we should assume pipelines are already available to SCM and that no ad-hoc pipelines will be created. If a single thread creates pipelines, there is no need for blocking queues or synchronization, except that the utilization counters used for selecting a pipeline need to be atomic. was (Author: swagle): [~timmylicheng] I believe by internal data structures, I meant the _PipelineStateMap_ but I will take a look at the doc and get back if any other data structures that need an update. The only way new pipelines are created should through the Background pipeline creator job. We should not create any pipelines on client requests, in fact we should assume pipelines are available to SCM already and no ad-hoc pipelines will be created. If a single thread creates pipelines, not need to blocking queues or synchronization, except the utilization counters used for selecting pipeline need to be Atomic. 
> Add ability to SCM for creating multiple pipelines with same datanode > - > > Key: HDDS-1569 > URL: https://issues.apache.org/jira/browse/HDDS-1569 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Siddharth Wagle >Assignee: Li Cheng >Priority: Major > > - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines > with datanodes that are not a part of sufficient pipelines > - Define soft and hard upper bounds for pipeline membership > - Create SCMAllocationManager that can be leveraged to get a candidate set of > datanodes based on placement policies > - Add the datanodes to internal datastructures -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode
[ https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921587#comment-16921587 ] Siddharth Wagle commented on HDDS-1569: --- [~timmylicheng] I believe by internal data structures I meant the _PipelineStateMap_, but I will take a look at the doc and get back if any other data structures need an update. The only way new pipelines are created should be through the Background pipeline creator job. We should not create any pipelines on client requests; in fact, we should assume pipelines are already available to SCM and that no ad-hoc pipelines will be created. If a single thread creates pipelines, there is no need for blocking queues or synchronization, except that the utilization counters used for selecting a pipeline need to be atomic. > Add ability to SCM for creating multiple pipelines with same datanode > - > > Key: HDDS-1569 > URL: https://issues.apache.org/jira/browse/HDDS-1569 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Siddharth Wagle >Assignee: Li Cheng >Priority: Major > > - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines > with datanodes that are not a part of sufficient pipelines > - Define soft and hard upper bounds for pipeline membership > - Create SCMAllocationManager that can be leveraged to get a candidate set of > datanodes based on placement policies > - Add the datanodes to internal datastructures -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode
[ https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921587#comment-16921587 ] Siddharth Wagle edited comment on HDDS-1569 at 9/3/19 5:28 PM: --- [~timmylicheng] I believe by internal data structures I meant the _PipelineStateMap_, but I will take a look at the doc/code and get back if any other data structures need an update. The only way new pipelines are created should be through the Background pipeline creator job. We should not create any pipelines on client requests; in fact, we should assume pipelines are already available to SCM and that no ad-hoc pipelines will be created. If a single thread creates pipelines, there is no need for blocking queues or synchronization, except that the utilization counters used for selecting a pipeline need to be atomic (currently it is round-robin allocation). was (Author: swagle): [~timmylicheng] I believe by internal data structures, I meant the _PipelineStateMap_ but I will take a look at the doc/code and get back if any other data structures that need an update. The only way new pipelines are created should be through the Background pipeline creator job. We should not create any pipelines on client requests, in fact, we should assume pipelines are available to SCM already and no ad-hoc pipelines will be created. If a single thread creates pipelines, not need to blocking queues or synchronization, except the utilization counters used for selecting pipeline need to be Atomic. 
> Add ability to SCM for creating multiple pipelines with same datanode > - > > Key: HDDS-1569 > URL: https://issues.apache.org/jira/browse/HDDS-1569 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Siddharth Wagle >Assignee: Li Cheng >Priority: Major > > - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines > with datanodes that are not a part of sufficient pipelines > - Define soft and hard upper bounds for pipeline membership > - Create SCMAllocationManager that can be leveraged to get a candidate set of > datanodes based on placement policies > - Add the datanodes to internal datastructures -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode
[ https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921587#comment-16921587 ] Siddharth Wagle edited comment on HDDS-1569 at 9/3/19 5:28 PM: --- [~timmylicheng] I believe by internal data structures I meant the _PipelineStateMap_, but I will take a look at the doc/code and get back if any other data structures need an update. The only way new pipelines are created should be through the Background pipeline creator job. We should not create any pipelines on client requests; in fact, we should assume pipelines are already available to SCM and that no ad-hoc pipelines will be created. If a single thread creates pipelines, there is no need for blocking queues or synchronization, except that the utilization counters used for selecting a pipeline need to be atomic. was (Author: swagle): [~timmylicheng] I believe by internal data structures, I meant the _PipelineStateMap_ but I will take a look at the doc/code and get back if any other data structures that need an update. The only way new pipelines are created should through the Background pipeline creator job. We should not create any pipelines on client requests, in fact we should assume pipelines are available to SCM already and no ad-hoc pipelines will be created. If a single thread creates pipelines, not need to blocking queues or synchronization, except the utilization counters used for selecting pipeline need to be Atomic. 
> Add ability to SCM for creating multiple pipelines with same datanode > - > > Key: HDDS-1569 > URL: https://issues.apache.org/jira/browse/HDDS-1569 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Siddharth Wagle >Assignee: Li Cheng >Priority: Major > > - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines > with datanodes that are not a part of sufficient pipelines > - Define soft and hard upper bounds for pipeline membership > - Create SCMAllocationManager that can be leveraged to get a candidate set of > datanodes based on placement policies > - Add the datanodes to internal datastructures -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Comment Edited] (HDDS-1569) Add ability to SCM for creating multiple pipelines with same datanode
[ https://issues.apache.org/jira/browse/HDDS-1569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16921587#comment-16921587 ] Siddharth Wagle edited comment on HDDS-1569 at 9/3/19 5:29 PM: --- [~timmylicheng] I believe by internal data structures I meant the _PipelineStateMap_, but I will take a look at the doc/code and get back if any other data structures need an update. The only way new pipelines are created should be through the Background pipeline creator job. We should not create any pipelines on client requests; in fact, we should assume pipelines are already available to SCM and that no ad-hoc pipelines will be created. If a single thread creates pipelines, there is no need for blocking queues or synchronization, except that the utilization counters used for selecting a pipeline need to be atomic (currently it is round-robin allocation). was (Author: swagle): [~timmylicheng] I believe by internal data structures, I meant the _PipelineStateMap_ but I will take a look at the doc/code and get back if any other data structures that need an update. The only way new pipelines are created should be through the Background pipeline creator job. We should not create any pipelines on client requests, in fact, we should assume pipelines are available to SCM already and no ad-hoc pipelines will be created. If a single thread creates pipelines, not need to blocking queues or synchronization, except the utilization counters used for selecting pipeline need to be Atomic (current its round-robin allocation). 
> Add ability to SCM for creating multiple pipelines with same datanode > - > > Key: HDDS-1569 > URL: https://issues.apache.org/jira/browse/HDDS-1569 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task > Components: SCM >Reporter: Siddharth Wagle >Assignee: Li Cheng >Priority: Major > > - Refactor _RatisPipelineProvider.create()_ to be able to create pipelines > with datanodes that are not a part of sufficient pipelines > - Define soft and hard upper bounds for pipeline membership > - Create SCMAllocationManager that can be leveraged to get a candidate set of > datanodes based on placement policies > - Add the datanodes to internal datastructures -- This message was sent by Atlassian Jira (v8.3.2#803003) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
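The scheme settled on in the comments above (a single background creator thread, with only atomic utilization counters shared for round-robin pipeline selection) could be sketched as follows; PipelineSelector is an illustrative name, not an actual SCM class:

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Sketch of lock-free round-robin selection over the available pipelines.
 *  The atomic counter is the only shared mutable state, so selection needs
 *  no blocking queue or lock; pipeline creation itself stays on a single
 *  background thread. */
class PipelineSelector<T> {
    private final AtomicInteger counter = new AtomicInteger();

    T next(List<T> pipelines) {
        // floorMod keeps the index non-negative even after the counter wraps.
        int idx = Math.floorMod(counter.getAndIncrement(), pipelines.size());
        return pipelines.get(idx);
    }
}
```

Swapping the plain counter for per-pipeline utilization counters (still atomic) would give utilization-weighted selection instead of strict round-robin.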
[jira] [Updated] (HDDS-1189) Recon Aggregate DB schema and ORM
[ https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Siddharth Wagle updated HDDS-1189: -- Attachment: HDDS-1189.05.patch > Recon Aggregate DB schema and ORM > - > > Key: HDDS-1189 > URL: https://issues.apache.org/jira/browse/HDDS-1189 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, > HDDS-1189.03.patch, HDDS-1189.04.patch, HDDS-1189.05.patch > > > _Objectives_ > - Define V1 of the db schema for recon service > - The current proposal is to use jOOQ as the ORM for SQL interaction. For two > main reasons: a) powerful DSL for querying, that abstracts out SQL dialects, > b) Allows code to schema and schema to code seamless transition, critical for > creating DDL through the code and unit testing across versions of the > application. > - Add e2e unit tests suite for Recon entities, created based on the design doc -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDDS-1189) Recon Aggregate DB schema and ORM
[ https://issues.apache.org/jira/browse/HDDS-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16808263#comment-16808263 ] Siddharth Wagle commented on HDDS-1189: --- Thanks [~elek] and [~avijayan] for your reviews. Addressed the comments in 05 as below: _License issue_: - Removed the HSQLDB dependency and reverted to using in-memory SQLite for code generation; the HSQLDB license is actually BSD, but there was an alternate way out, so I went with that. - spring-jdbc is Apache v2 _Configuration issues_: - Used recon.dbdir for constructing the default URL - There is no password field tag; checked the source tree - The findbugs plugin does not apply recursively, hence the need to be explicit > Recon Aggregate DB schema and ORM > - > > Key: HDDS-1189 > URL: https://issues.apache.org/jira/browse/HDDS-1189 > Project: Hadoop Distributed Data Store > Issue Type: Sub-task >Affects Versions: 0.5.0 >Reporter: Siddharth Wagle >Assignee: Siddharth Wagle >Priority: Major > Fix For: 0.5.0 > > Attachments: HDDS-1189.01.patch, HDDS-1189.02.patch, > HDDS-1189.03.patch, HDDS-1189.04.patch, HDDS-1189.05.patch > > > _Objectives_ > - Define V1 of the db schema for recon service > - The current proposal is to use jOOQ as the ORM for SQL interaction. For two > main reasons: a) powerful DSL for querying, that abstracts out SQL dialects, > b) Allows code to schema and schema to code seamless transition, critical for > creating DDL through the code and unit testing across versions of the > application. > - Add e2e unit tests suite for Recon entities, created based on the design doc -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org