[ 
https://issues.apache.org/jira/browse/HDFS-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17005115#comment-17005115
 ] 

Xiaoqiao He commented on HDFS-15082:
------------------------------------

Hi [~ayushtkn], Thanks for your comments.
This idea comes from [~elgoiri]'s 
[comments|https://issues.apache.org/jira/browse/HDFS-15051?focusedCommentId=16994972&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16994972].
Since we open the add/update mount entry privilege to end users, some of these 
operations can produce invalid entries, for example when a path component is 
too long. If an end user creates such an invalid entry and does not clean it up 
in time, invalid entries pile up and are never used again, since there is no 
auto-clean mechanism today. This may reduce the performance of 
query/write/delete operations. (Sorry, I have not collected detailed benchmark 
results; I only observed increased latency with hundreds of thousands of 
znodes (mount table entries and delegation token entries) under 
ZKDelegationTokenSecretManager.) So my thoughts are:
a. Restrict mount entry operations to the super user only (it looks like this 
option does not fit [~elgoiri]'s cases).
b. Keep invalid entries out of the mount table (PathComponentTooLongException 
is one such case); see the sketch below.
For the configuration value, the current demo patch is not the final solution; 
more suggestions are welcome. Of course, it is not very graceful when different 
namespaces use different max-component-length limits. What about using a larger 
default value (such as 512 or 1024) as the limit? Any other thoughts?
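
As a concrete illustration of option b and the configuration question above, 
here is a minimal Java sketch (not the HDFS-15082 patch itself) of a 
Router-side check that rejects a destination path containing an over-long 
component. The config key name, the 512 default, the byte-length comparison, 
and the use of IllegalArgumentException are assumptions for illustration only; 
the real patch may reuse dfs.namenode.fs-limits.max-component-length and 
surface PathComponentTooLongException instead.

{code:java}
import java.nio.charset.StandardCharsets;

/**
 * Sketch of a Router-side validator for mount entry destination paths,
 * analogous to the NameNode's dfs.namenode.fs-limits.max-component-length
 * check. Illustration only, not the actual patch.
 */
public class MountEntryPathValidator {

  // Hypothetical Router-side key; the real patch may mirror the NameNode's
  // dfs.namenode.fs-limits.max-component-length (NameNode default: 255).
  public static final String MAX_COMPONENT_LENGTH_KEY =
      "dfs.federation.router.fs-limits.max-component-length";
  // A larger default (e.g. 512), as discussed in the comment above.
  public static final int MAX_COMPONENT_LENGTH_DEFAULT = 512;

  private final int maxComponentLength;

  public MountEntryPathValidator(int maxComponentLength) {
    this.maxComponentLength = maxComponentLength;
  }

  /**
   * Checks every component of the destination path before the mount entry
   * is stored. Throws if any component exceeds the configured limit; a real
   * implementation would likely throw PathComponentTooLongException.
   */
  public void checkDestination(String destPath) {
    if (destPath == null || destPath.isEmpty()) {
      throw new IllegalArgumentException("Destination path is empty");
    }
    for (String component : destPath.split("/")) {
      // Compare by UTF-8 byte length, matching how HDFS stores path
      // component names as bytes (an assumption in this sketch).
      int length = component.getBytes(StandardCharsets.UTF_8).length;
      if (maxComponentLength > 0 && length > maxComponentLength) {
        throw new IllegalArgumentException(
            "The maximum path component name limit of " + component
            + " in directory " + destPath + " is exceeded: limit="
            + maxComponentLength + " length=" + length);
      }
    }
  }

  public static void main(String[] args) {
    MountEntryPathValidator validator =
        new MountEntryPathValidator(MAX_COMPONENT_LENGTH_DEFAULT);
    // Valid destination: every component is within the limit.
    validator.checkDestination("/ns0/user/project/data");
    // An over-long component would be rejected before the entry is stored:
    // validator.checkDestination("/ns0/" + "x".repeat(600));
  }
}
{code}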

> RBF: Check each component length of destination path when add/update mount 
> entry
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-15082
>                 URL: https://issues.apache.org/jira/browse/HDFS-15082
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rbf
>            Reporter: Xiaoqiao He
>            Assignee: Xiaoqiao He
>            Priority: Major
>         Attachments: HDFS-15082.001.patch
>
>
> When adding/updating a mount entry, the length of each component of the 
> destination path could exceed the filesystem's path component length limit 
> (see `dfs.namenode.fs-limits.max-component-length` on the NameNode). So we 
> should check the length of each component of the destination path on the 
> Router side when adding/updating a mount entry.


