[ https://issues.apache.org/jira/browse/HDFS-13274?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17118420#comment-17118420 ]

Ayush Saxena commented on HDFS-13274:
-------------------------------------

GetServerDefaults doesn't go to all namespaces as of now; it goes to the 
default NS if available, else to any one of the available namespaces. We have 
cached that too, in HDFS-15096.
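To illustrate the point above, here is a minimal sketch (the class and method names are made up, not the actual router code) of the cached, single-namespace behavior: one RPC to a single NS, and subsequent calls served from the cache until it expires.

```java
// Hypothetical sketch of the HDFS-15096 behavior: getServerDefaults is
// answered from one namespace and cached, never fanned out to every NS.
public class ServerDefaultsCache {
    private final long ttlMs;        // cache validity period
    private long cachedBlockSize = -1;
    private long fetchedAtMs;
    private int rpcCount = 0;        // exposed only to show how few RPCs happen

    public ServerDefaultsCache(long ttlMs) {
        this.ttlMs = ttlMs;
    }

    // Stand-in for the real RPC: it targets the default NS when configured,
    // otherwise any single available NS -- not all of them.
    private long fetchFromOneNamespace() {
        rpcCount++;
        return 128L * 1024 * 1024; // pretend server-default block size
    }

    public synchronized long getServerDefaults() {
        long now = System.currentTimeMillis();
        if (cachedBlockSize < 0 || now - fetchedAtMs > ttlMs) {
            cachedBlockSize = fetchFromOneNamespace();
            fetchedAtMs = now;
        }
        return cachedBlockSize;
    }

    public int rpcCount() {
        return rpcCount;
    }
}
```

With a reasonable TTL, repeated client calls cost at most one RPC per TTL window, so this call should not be a throughput concern.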
For RenewLease, I don't think we can change anything here: it is bound to go 
to all namespaces, and if one NS is slow, it is bound to suffer. Maybe some 
configuration tuning, such as changing the lease times, can be done, depending 
upon the use case.
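A small simulation (again illustrative, not router code) of why one slow namespace dominates: even when the renewal is fanned out concurrently, the call can only complete once the slowest NS responds, so the elapsed time tracks the maximum latency, not the average.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative sketch: renewLease must reach every namespace, so a fully
// parallel fan-out is still gated by the slowest NS.
public class RenewLeaseFanout {
    // Simulated per-namespace renewLease RPC; latencyMs stands in for
    // network plus Namenode processing time.
    static void renewLease(long latencyMs) {
        try {
            Thread.sleep(latencyMs);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Issue the renewal to all namespaces concurrently and wait for all of
    // them; elapsed time is roughly max(latencies).
    static long renewAll(List<Long> nsLatenciesMs) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(nsLatenciesMs.size());
        long start = System.nanoTime();
        List<Future<?>> futures = new ArrayList<>();
        for (long l : nsLatenciesMs) {
            futures.add(pool.submit(() -> renewLease(l)));
        }
        for (Future<?> f : futures) {
            f.get(); // block until every namespace has renewed
        }
        pool.shutdown();
        return (System.nanoTime() - start) / 1_000_000;
    }
}
```

Three fast namespaces plus one slow one will take roughly as long as the slow one alone, which is why only lease-time tuning, not routing changes, can help here.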
GetListing may take time if the listing tends to include mount entries. Say 
you are listing / and have a bunch of mount entries: the number of children, 
the permissions, and so on all need to be fetched from the Namenode, and then 
each entry needs to be recreated. If it is just a proxy with no mount entries, 
it shouldn't take much time. If you have multiple destinations for the mount 
points, it would take even more time when the listing includes mount entries.
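Roughly, the cost model looks like the sketch below (the mount table contents and helper names are hypothetical): a plain proxied listing is one RPC, while every mount entry under the listed path adds one lookup per destination namespace before the entry can be rebuilt.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch: why listing a path that contains mount entries
// costs more RPCs than a plain proxied listing.
public class MountListingSketch {
    // Hypothetical mount table: child name under "/" -> destination namespaces.
    static final Map<String, List<String>> MOUNTS = new TreeMap<>();
    static {
        MOUNTS.put("data", List.of("ns0"));
        MOUNTS.put("logs", List.of("ns0", "ns1")); // multi-destination mount
    }

    static int rpcCount = 0;

    // One RPC returning the real children from the resolved namespace.
    static List<String> proxiedListing() {
        rpcCount++;
        return new ArrayList<>(List.of("tmp"));
    }

    // Each mount entry must be rebuilt: child counts, permissions, etc. are
    // fetched from every destination NS of the entry -- one RPC apiece.
    static void fillMountEntry(String name, List<String> destinations) {
        rpcCount += destinations.size();
    }

    static List<String> listRoot() {
        List<String> children = proxiedListing();
        for (Map.Entry<String, List<String>> e : MOUNTS.entrySet()) {
            fillMountEntry(e.getKey(), e.getValue());
            children.add(e.getKey());
        }
        Collections.sort(children);
        return children;
    }
}
```

So a listing with M mount entries spread over D destinations costs on the order of 1 + sum of D per entry RPCs, which matches the slowdown described above.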

You said there are 16 Routers; do all of them show similar numbers?

> RBF: Extend RouterRpcClient to use multiple sockets
> ---------------------------------------------------
>
>                 Key: HDFS-13274
>                 URL: https://issues.apache.org/jira/browse/HDFS-13274
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Íñigo Goiri
>            Assignee: Íñigo Goiri
>            Priority: Major
>
> HADOOP-13144 introduces the ability to create multiple connections for the 
> same user and use different sockets. The RouterRpcClient should use this 
> approach to get a better throughput.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
