[ https://issues.apache.org/jira/browse/HDFS-13381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16436073#comment-16436073 ]

Daryn Sharp commented on HDFS-13381:
------------------------------------

The overall intent of using file id paths is to completely decouple the file id 
collector from the namesystem.  One of the cited reasons for separate 
internal/external file collectors was the perceived inability to look up by id – 
which is already possible via inode paths.
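
For illustration, a minimal sketch of an id-based lookup, assuming the standard 
/.reserved/.inodes/<id> reserved-path convention (the helper below is 
hypothetical; DFSUtilClient#makePathFromFileId serves this purpose in the 
client library):

import org.apache.hadoop.fs.Path;

// Hypothetical helper, shown only to illustrate the convention; it is not
// the actual DFSUtilClient code.
public final class InodePathSketch {
  private static final String DOT_RESERVED = ".reserved";
  private static final String DOT_INODES = ".inodes";

  /** Build an id-based path such as /.reserved/.inodes/16389. */
  static Path pathFromFileId(long fileId) {
    return new Path(Path.SEPARATOR + DOT_RESERVED
        + Path.SEPARATOR + DOT_INODES
        + Path.SEPARATOR + fileId);
  }

  public static void main(String[] args) {
    // Any FileSystem call (open, getFileStatus, ...) can then resolve the
    // file by id through this path, without the caller needing its current
    // textual path.
    System.out.println(pathFromFileId(16389L));
  }
}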

Having more than one file collector implementation means the namesystem context 
abstraction is being violated.  SPS is not a true standalone service if there 
are two implementations of internal components like the file collector.

The overhead of maintaining and testing both a tightly coupled and a loosely 
coupled version will not be sustainable.  The context shim must be the only 
pluggable piece.  Please provide a single implementation of the collector that 
leverages inode id lookups, with the context performing the inode path 
conversion.
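
A rough sketch of the shape I mean, with illustrative names rather than the 
actual SPS classes:

// The context shim is the only pluggable piece: internal and external SPS
// supply different Context implementations, while a single FileCollector
// works purely on id-based paths handed back by the context.
interface Context {
  /** Convert an inode id into a path the collector can operate on. */
  String getFilePath(long inodeId);
}

class ExternalContext implements Context {
  @Override
  public String getFilePath(long inodeId) {
    // External SPS: an id-based reserved-inode path (what
    // DFSUtilClient#makePathFromFileId produces), so no namesystem access
    // is required here.
    return "/.reserved/.inodes/" + inodeId;
  }
}

class FileCollector {
  private final Context context;

  FileCollector(Context context) {
    this.context = context;
  }

  void scan(long inodeId) {
    String path = context.getFilePath(inodeId);
    // ... traverse the file/directory at 'path' and queue block-move work ...
  }
}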


> [SPS]: Use DFSUtilClient#makePathFromFileId() to prepare satisfier file path
> ----------------------------------------------------------------------------
>
>                 Key: HDFS-13381
>                 URL: https://issues.apache.org/jira/browse/HDFS-13381
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: Rakesh R
>            Assignee: Rakesh R
>            Priority: Major
>         Attachments: HDFS-13381-HDFS-10285-00.patch, 
> HDFS-13381-HDFS-10285-01.patch
>
>
> This Jira task will address the following comments:
>  # Use DFSUtilClient::makePathFromFileId, instead of generics (one for string 
> path and another for inodeId) like today.
>  # Only the context impl differs for external/internal SPS. Here, we can 
> simply move FileCollector and BlockMoveTaskHandler to the Context interface.


