[ 
https://issues.apache.org/jira/browse/HADOOP-3199?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12586824#action_12586824
 ] 

Ankur commented on HADOOP-3199:
-------------------------------

> A significant disadvantage of FTP over HTTP is ...

This is certainly an issue with naive clients. One workaround I can think of 
is to run FTP servers on all the nodes. 
We could then use slightly more intelligent FTP clients that understand redirects. 

Any other suggestions to get around the bottleneck?




> Need an FTP Server implementation over HDFS
> -------------------------------------------
>
>                 Key: HADOOP-3199
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3199
>             Project: Hadoop Core
>          Issue Type: New Feature
>          Components: dfs
>    Affects Versions: 0.16.2
>            Reporter: Ankur
>
> An FTP server that sits on top of a distributed filesystem like HDFS has many 
> benefits. It allows data to be stored and managed via clients that do not 
> know HDFS but understand other, more popular transport mechanisms like FTP. 
> The data is thus managed via a standard, widely supported protocol.
> The idea is to leverage what is already available in Apache MINA FtpServer 
> (http://mina.apache.org/ftpserver.html) and build on top of it. This FTP 
> server can be embedded easily in Hadoop and can be programmed to talk to 
> HDFS via an Ftplet that is run by the FTP server.
> Ideally there should be options to configure the FTP server settings (in 
> hadoop-default.xml) that allow the FTP server to be started when HDFS is 
> booted. 
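
As a rough sketch of what such hadoop-default.xml options might look like: the
property names below are purely illustrative assumptions (nothing like them
exists in any Hadoop release yet), following the usual Hadoop configuration
property format.

```xml
<!-- Hypothetical properties; names and defaults are illustrative only -->
<property>
  <name>dfs.ftp.server.enable</name>
  <value>true</value>
  <description>If true, start the embedded FTP server when HDFS boots.</description>
</property>

<property>
  <name>dfs.ftp.server.port</name>
  <value>2121</value>
  <description>Port on which the embedded FTP server listens.</description>
</property>
```

Whatever the final names turn out to be, keeping them under a single prefix
(e.g. dfs.ftp.*) would match how other HDFS settings are grouped.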

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
