[ https://issues.apache.org/jira/browse/HADOOP-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12511353 ]

Tom White commented on HADOOP-1563:
-----------------------------------

bq. I don't see an easy way to handle S3 with this, exposing it as a 
hierarchical space of slash-delimited directories, except perhaps to write a 
servlet that proxies directory listings and redirects for file content.

The proxy idea sounds good - the servlet pseudocode would be something like:
{noformat} 
if path is not slash-terminated
  if HEAD S3 path is successful
    redirect to S3 resource at path
  else
    redirect to path/
else
  GET S3 bucket with prefix = path, delimiter = /
  if bucket is empty
    return 404
  else
    return bucket contents as XHTML
{noformat} 

(Of course, the work to do this would go in a new Jira issue.)
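The routing decision in the pseudocode above could be sketched roughly as follows. This is only an illustration, not code from any attached patch: the names ({{S3ProxySketch}}, {{ProxyAction}}, {{decide}}) are hypothetical, and the bucket is modeled as an in-memory set of object keys standing in for the HEAD/GET-with-prefix calls a real servlet would make against S3.

```java
import java.util.Set;
import java.util.TreeSet;

public class S3ProxySketch {

    // The four outcomes of the pseudocode: redirect to the S3 object,
    // retry the request as a directory, report 404, or list the "directory".
    enum ProxyAction { REDIRECT_TO_S3, REDIRECT_TO_DIR, NOT_FOUND, LIST_DIR }

    /**
     * Decide what the proxy servlet should do for a request path.
     * keys models the bucket's flat keyspace (what HEAD and
     * GET-with-prefix would consult in a real implementation).
     */
    static ProxyAction decide(Set<String> keys, String path) {
        if (!path.endsWith("/")) {
            // "HEAD S3 path": is there an object at exactly this key?
            if (keys.contains(path)) {
                return ProxyAction.REDIRECT_TO_S3; // serve file content from S3
            }
            return ProxyAction.REDIRECT_TO_DIR;    // redirect to path/
        }
        // "GET S3 bucket with prefix = path, delimiter = /":
        // does anything live under this slash-terminated prefix?
        boolean empty = keys.stream().noneMatch(k -> k.startsWith(path));
        return empty ? ProxyAction.NOT_FOUND      // 404
                     : ProxyAction.LIST_DIR;      // render contents as XHTML
    }
}
```

A real servlet would additionally issue the redirects ({{HttpServletResponse.sendRedirect}}) and honor the {{delimiter}} so that only the immediate children of the prefix appear in the listing.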

> Create FileSystem implementation to read HDFS data via http
> -----------------------------------------------------------
>
>                 Key: HADOOP-1563
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1563
>             Project: Hadoop
>          Issue Type: New Feature
>          Components: fs
>    Affects Versions: 0.14.0
>            Reporter: Owen O'Malley
>            Assignee: Chris Douglas
>         Attachments: httpfs.patch, httpfs2.patch
>
>
> There should be a FileSystem implementation that can read from a Namenode's 
> http interface. This would have a couple of useful abilities:
>   1. Copy using distcp between different versions of HDFS.
>   2. Use map/reduce inputs from a different version of HDFS. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
