[ 
https://issues.apache.org/jira/browse/HDFS-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17081664#comment-17081664
 ] 

Ayush Saxena commented on HDFS-15082:
-------------------------------------

Need to find a better place to document this, so that if somebody starts facing 
issues due to this change, they can reach it easily. I don't think we can document 
it in {{hdfs-default.xml}}; I haven't seen RBF-related stuff in any Namenode config.
There are a couple of other alternatives too: we could add a new RBF-side config 
with a default of -1, which means the length won't be checked by default, and keep 
this logic only when the admin explicitly configures a limit.
We could also fetch the values from the Namenode, but I don't think that is 
straightforward, and it might not be worth doing.
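The -1-default alternative could be sketched roughly as below. This is only an illustration, not the actual patch: the class name {{MountPathValidator}}, the method {{isValid}}, and the way the limit is passed in are all made up here; only the semantics (-1 disables the check, otherwise every path component must fit the limit) come from the proposal above.

```java
/**
 * Hypothetical sketch of the proposed RBF-side component-length check.
 * A limit of -1 (the suggested default) disables the check entirely.
 */
public class MountPathValidator {

  /**
   * Returns true if every "/"-separated component of the destination
   * path is within maxComponentLength, or if the limit is -1.
   */
  public static boolean isValid(String path, int maxComponentLength) {
    if (maxComponentLength < 0) {
      // -1 (or any negative value): length is not checked by default.
      return true;
    }
    for (String component : path.split("/")) {
      if (component.length() > maxComponentLength) {
        return false;
      }
    }
    return true;
  }
}
```

With this shape, an unset or -1 config keeps today's behavior, and only an admin who explicitly configures a limit opts into the new rejection logic.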

If none of the above looks better and everyone is good with this approach, then 
we just need to find a good place to document it. :)

[~elgoiri] any thoughts on this? I am not sure, but could the present approach be 
considered incompatible?

> RBF: Check each component length of destination path when add/update mount 
> entry
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-15082
>                 URL: https://issues.apache.org/jira/browse/HDFS-15082
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rbf
>            Reporter: Xiaoqiao He
>            Assignee: Xiaoqiao He
>            Priority: Major
>         Attachments: HDFS-15082.001.patch, HDFS-15082.002.patch, 
> HDFS-15082.003.patch
>
>
> When adding/updating a mount entry, the length of each component of the 
> destination path could exceed the filesystem's path component length limit 
> (see `dfs.namenode.fs-limits.max-component-length` on the NameNode). So we 
> should check the length of each component of the destination path on the 
> Router side when a mount entry is added/updated.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
