[ https://issues.apache.org/jira/browse/HDFS-10721?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15409617#comment-15409617 ]

John Zhuge commented on HDFS-10721:
-----------------------------------

Sorry for the confusion: {{c_user}} is a new HDFS user with read-only access to 
{{/data}}, created specifically to provide a workaround. I should have named 
the user {{readonly_b_webapp}} :)
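
For the record, one way such a read-only grant can be set up is with HDFS ACLs 
(this assumes {{dfs.namenode.acls.enabled}} is true; the user name is just the 
example from above):

    # Grant the workaround user read-only (r-x) access to /data, recursively
    hdfs dfs -setfacl -R -m user:readonly_b_webapp:r-x /data
    # Verify the resulting ACL
    hdfs dfs -getfacl /data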

I do agree with you that an export table, like the one used by Unix NFSv3 or 
NFSv4 servers, gives the admin more control. The export table should probably 
support an allowed-client list and export options per export point.
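
As a rough illustration only (not a proposed format), the Linux {{/etc/exports}} 
syntax already captures both ideas, an allowed-client list plus per-export 
options; the hosts and options below are made-up examples:

    /user        10.0.0.0/24(rw,sync)
    /data        webapp-host.example.com(ro)
    /app-logs    *.example.com(rw,root_squash)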

> HDFS NFS Gateway - Exporting multiple Directories 
> --------------------------------------------------
>
>                 Key: HDFS-10721
>                 URL: https://issues.apache.org/jira/browse/HDFS-10721
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: hdfs
>            Reporter: Senthilkumar
>            Priority: Minor
>
> The current HDFS NFS gateway supports exporting only one directory.
> Example:
>     <property>
>       <name>nfs.export.point</name>
>       <value>/user</value>
>     </property>
> This property lets us export one particular directory.
> Code block (the RpcProgramMountd constructor):
> public RpcProgramMountd(NfsConfiguration config,
>       DatagramSocket registrationSocket, boolean allowInsecurePorts)
>       throws IOException {
>     // Note that RPC cache is not enabled
>     super("mountd", "localhost", config.getInt(
>         NfsConfigKeys.DFS_NFS_MOUNTD_PORT_KEY,
>         NfsConfigKeys.DFS_NFS_MOUNTD_PORT_DEFAULT), PROGRAM, VERSION_1,
>         VERSION_3, registrationSocket, allowInsecurePorts);
>     exports = new ArrayList<String>();
>      exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>         NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
>     this.hostsMatcher = NfsExports.getInstance(config);
>     this.mounts = Collections.synchronizedList(new ArrayList<MountEntry>());
>     UserGroupInformation.setConfiguration(config);
>     SecurityUtil.login(config, NfsConfigKeys.DFS_NFS_KEYTAB_FILE_KEY,
>         NfsConfigKeys.DFS_NFS_KERBEROS_PRINCIPAL_KEY);
>     this.dfsClient = new DFSClient(NameNode.getAddress(config), config);
>   }
> Export list (note that only a single entry is added):
> exports.add(config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>         NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT));
> The current code supports exposing only one directory; based on the example 
> above, only /user can be exported.
> Most production environments expect multiple directories to be exported so 
> that they can be mounted by different clients.
> Example:
>     <property>
>       <name>nfs.export.point</name>
>       <value>/user,/data/web_crawler,/app-logs</value>
>     </property>
> Here I have three directories to be exposed:
> 1) /user
> 2) /data/web_crawler
> 3) /app-logs
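> One possible way to support this (just a rough sketch assuming a 
> comma-separated list is acceptable, not a patch) is to split the configured 
> value inside the constructor shown above:
>     // Sketch: treat nfs.export.point as a comma-separated list of export points
>     exports = new ArrayList<String>();
>     String exportPoints = config.get(NfsConfigKeys.DFS_NFS_EXPORT_POINT_KEY,
>         NfsConfigKeys.DFS_NFS_EXPORT_POINT_DEFAULT);
>     for (String exportPoint : exportPoints.split(",")) {
>       exports.add(exportPoint.trim());
>     }
> Other parts of the gateway that assume a single export point would presumably 
> need matching changes.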
> This would help us mount specific directories for particular clients (say 
> client A wants to write data to /app-logs; the Hadoop admin can mount that 
> export and hand it over to the client).
> Please advise. Sorry if this feature is already implemented.


