[ https://issues.apache.org/jira/browse/HDFS-15082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17004700#comment-17004700 ]

Xiaoqiao He commented on HDFS-15082:
------------------------------------

A mount entry is useless at the Router side if we add/update it with a 
destination path whose components are not checked against the length limit 
enforced when `DFS_NAMENODE_MAX_COMPONENT_LENGTH_KEY` is enabled on the 
NameNode side; in fact, it is enabled with a default of 255. So I think we 
should check the component lengths and avoid adding/updating such unusable 
entries in the Router mount table.
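A minimal sketch of the check being proposed, assuming the Router simply reuses the NameNode key and its default of 255; the class and method names are illustrative only and not taken from the attached patch:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class MountPathValidator {

  /** Hypothetical helper: returns null if valid, else an error message. */
  public static String checkDestination(String dest, Configuration conf) {
    // 0 or a negative value means "no limit", mirroring the NameNode semantics.
    int maxLength = conf.getInt(
        "dfs.namenode.fs-limits.max-component-length", 255);
    if (maxLength <= 0) {
      return null;
    }
    for (String component : dest.split("/")) {
      // The NameNode measures component length in bytes of the UTF-8 encoding.
      int length =
          component.getBytes(java.nio.charset.StandardCharsets.UTF_8).length;
      if (length > maxLength) {
        return "Component " + component + " of " + dest
            + " exceeds the maximum component length " + maxLength;
      }
    }
    return null;
  }
}
{code}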
{quote}is it similar to HDFS-13576?{quote}
They are exactly the same. Sorry, I had not noticed it before.
{quote}Are we having a separate configuration at the Router to specify the path 
length independent of the namespace?{quote}
Any other suggestions? I thought about reusing 
`dfs.namenode.fs-limits.max-component-length` at the beginning, but IMO it 
will be more convenient to configure if we keep them separate.
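If a separate Router-side setting is preferred, the check above would only need to read a router-scoped key instead. The key name below is hypothetical, not from the patch; falling back to 255 keeps parity with the NameNode default:
{code:java}
import org.apache.hadoop.conf.Configuration;

public class RouterLimits {
  // Hypothetical router-scoped key; the name is illustrative only.
  public static final String ROUTER_MAX_COMPONENT_LENGTH_KEY =
      "dfs.federation.router.fs-limits.max-component-length";

  public static int getMaxComponentLength(Configuration conf) {
    // Default of 255 matches dfs.namenode.fs-limits.max-component-length.
    return conf.getInt(ROUTER_MAX_COMPONENT_LENGTH_KEY, 255);
  }
}
{code}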

> RBF: Check each component length of destination path when add/update mount 
> entry
> --------------------------------------------------------------------------------
>
>                 Key: HDFS-15082
>                 URL: https://issues.apache.org/jira/browse/HDFS-15082
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: rbf
>            Reporter: Xiaoqiao He
>            Assignee: Xiaoqiao He
>            Priority: Major
>         Attachments: HDFS-15082.001.patch
>
>
> When adding/updating a mount entry, the length of each component of the 
> destination path could exceed the filesystem's path component length limit 
> (see `dfs.namenode.fs-limits.max-component-length` on the NameNode). So we 
> should check each component length of the destination path when 
> adding/updating a mount entry at the Router side.


