[ https://issues.apache.org/jira/browse/HADOOP-713?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12543110 ]

Doug Cutting commented on HADOOP-713:
-------------------------------------

This looks good to me.

Do any unit tests actually exercise this?  They probably call 
getContentLength() on files, but does any unit test use this on a directory and 
check that the value is reasonable?  If not, we might add such a test.

One other minor thing: the cast to DfsPath in DistributedFileSystem.java 
immediately after the changes in the patch can be removed.

> dfs list operation is too expensive
> -----------------------------------
>
>                 Key: HADOOP-713
>                 URL: https://issues.apache.org/jira/browse/HADOOP-713
>             Project: Hadoop
>          Issue Type: Improvement
>          Components: dfs
>    Affects Versions: 0.8.0
>            Reporter: Hairong Kuang
>            Assignee: dhruba borthakur
>            Priority: Blocker
>             Fix For: 0.15.1
>
>         Attachments: optimizeComputeContentLen.patch, 
> optimizeComputeContentLen2.patch
>
>
> A list request to dfs returns an array of DFSFileInfo. A DFSFileInfo of a 
> directory contains a field called contentsLen, indicating its size, which 
> gets computed on the namenode side by recursively going through its subdirs. 
> While this happens, the whole dfs directory tree is locked.
> The list operation is used a lot by DFSClient for listing a directory, 
> getting a file's size and # of replicas, and getting the size of dfs. Only 
> the last operation needs the field contentsLen to be computed.
> To reduce its cost, we can add a flag to the list request. ContentsLen is 
> computed only if the flag is set. By default, the flag is false.
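
The flagged-list idea above can be sketched with a toy in-memory tree. This is only an illustration of the technique, not the actual DFS code: the class and method names (Node, FileInfo, list, contentsLen) are made up, and the real DFSFileInfo/namenode types differ.

```java
import java.util.ArrayList;
import java.util.List;

public class ListFlagSketch {
    // Illustrative stand-in for a namenode tree entry.
    static class Node {
        final String name;
        final boolean isDir;
        final long len;                      // file length; 0 for directories
        final List<Node> children = new ArrayList<>();
        Node(String name, boolean isDir, long len) {
            this.name = name; this.isDir = isDir; this.len = len;
        }
    }

    // Illustrative stand-in for DFSFileInfo.
    static class FileInfo {
        final String name;
        final long contentsLen;              // -1 when not computed
        FileInfo(String name, long contentsLen) {
            this.name = name; this.contentsLen = contentsLen;
        }
    }

    // The expensive recursive walk; only run when the caller asks for it.
    static long contentsLen(Node n) {
        if (!n.isDir) return n.len;
        long sum = 0;
        for (Node c : n.children) sum += contentsLen(c);
        return sum;
    }

    // The proposed list operation: contentsLen is filled in only when the
    // flag is set, so a plain directory listing never triggers the walk.
    static List<FileInfo> list(Node dir, boolean computeContentsLen) {
        List<FileInfo> out = new ArrayList<>();
        for (Node c : dir.children) {
            long cl = computeContentsLen ? contentsLen(c) : -1;
            out.add(new FileInfo(c.name, cl));
        }
        return out;
    }

    public static void main(String[] args) {
        Node root = new Node("/", true, 0);
        Node sub = new Node("sub", true, 0);
        sub.children.add(new Node("a", false, 100));
        sub.children.add(new Node("b", false, 50));
        root.children.add(sub);

        // Cheap listing: no recursive size computation.
        System.out.println(list(root, false).get(0).contentsLen);
        // Caller explicitly asks for sizes.
        System.out.println(list(root, true).get(0).contentsLen);
    }
}
```

A unit test along these lines (listing a directory with the flag set and checking the reported size against the sum of the file lengths) is roughly what the comment above asks for.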

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
