[ https://issues.apache.org/jira/browse/HADOOP-1563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Doug Cutting updated HADOOP-1563:
---------------------------------
Attachment: httpfs2.patch
This fixes the 'name = name' issue Tom pointed out, and permits file lengths
greater than 2^31 bytes. I agree that this needs unit tests before it can be
committed. I'd also like to first implement a servlet for HDFS to test that
performance is acceptable.
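For context, a self-assignment like 'name = name' in a constructor silently
leaves the field unset, and an int-typed length caps files at 2^31 bytes. A
minimal sketch of both fix patterns; the class and field names here are
illustrative and not taken from the attached patch:

    // Hypothetical illustration only; names are not from httpfs2.patch.
    class HttpFileStatus {
        private String name;
        private long length;   // long rather than int, so lengths over 2^31 fit

        HttpFileStatus(String name, long length) {
            // The bug pattern: "name = name" assigns the parameter to itself
            // and leaves the field null; qualifying with "this" fixes it.
            this.name = name;
            this.length = length;
        }

        long getLen() { return length; }
    }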
I don't see an easy way to expose S3 through this as a hierarchical space of
slash-delimited directories, except perhaps to write a servlet that proxies
directory listings and issues redirects for file content.
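A rough sketch of that proxy idea, assuming a plain servlet in front of the
bucket: listing requests are answered inline, file requests are redirected to
a URL that serves the bytes directly. The helper methods and URL layout are
assumptions, not anything in this issue:

    import java.io.IOException;
    import javax.servlet.http.HttpServlet;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    public class ListingProxyServlet extends HttpServlet {
      @Override
      protected void doGet(HttpServletRequest req, HttpServletResponse res)
          throws IOException {
        String path = req.getPathInfo();          // slash-delimited "directory" path
        if (path == null || path.endsWith("/")) {
          // Proxy the directory listing: enumerate keys under this prefix.
          res.setContentType("text/plain");
          for (String child : listKeysUnder(path)) {   // hypothetical helper
            res.getWriter().println(child);
          }
        } else {
          // Redirect file reads to a URL that serves the content directly.
          res.sendRedirect(contentUrlFor(path));       // hypothetical helper
        }
      }

      private Iterable<String> listKeysUnder(String prefix) {
        return java.util.Collections.emptyList();      // would query the S3 bucket
      }

      private String contentUrlFor(String path) {
        return path;                                   // e.g. a public or signed S3 URL
      }
    }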
> Create FileSystem implementation to read HDFS data via http
> -----------------------------------------------------------
>
> Key: HADOOP-1563
> URL: https://issues.apache.org/jira/browse/HADOOP-1563
> Project: Hadoop
> Issue Type: New Feature
> Components: fs
> Affects Versions: 0.14.0
> Reporter: Owen O'Malley
> Assignee: Chris Douglas
> Attachments: httpfs.patch, httpfs2.patch
>
>
> There should be a FileSystem implementation that can read from a Namenode's
> http interface. This would have a couple of useful abilities:
> 1. Copy using distcp between different versions of HDFS.
> 2. Use map/reduce inputs from a different version of HDFS.
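As a rough illustration of the read path the quoted description asks for, a
cross-version copy only needs a byte stream fetched over HTTP, which distcp
can then write into the destination cluster with that cluster's own client.
The "/data" servlet path below is an assumption, not the actual interface:

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class HttpReadSketch {
      // Open a file on the remote cluster via its HTTP interface instead of
      // the version-specific RPC protocol.
      public static InputStream open(String namenodeHost, int port, String path)
          throws IOException {
        URL url = new URL("http", namenodeHost, port, "/data" + path);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        return conn.getInputStream();
      }
    }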