There are systems for plumbing file-system calls out to user processes;
FUSE does this on Linux, and there is a FUSE package for Hadoop. However,
pretending a remote resource is local holds a place of honor in the
system design antipattern hall of fame.
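For reference, a minimal sketch of the pattern the original question assumes: memory-mapping a local file in Java via NIO. If HDFS were exposed through a FUSE mount, a path under that mount would look the same to this code; the temp-file setup below is purely illustrative.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    // Map the file read-only and return its first n bytes as a String.
    // A fuse-dfs mount point would be handed to this method the same way
    // as any local path (the deployment details are an assumption here).
    static String readFirst(Path path, int n) throws IOException {
        try (FileChannel ch = FileChannel.open(path, StandardOpenOption.READ)) {
            // Map the whole file into the process address space, read-only.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] first = new byte[n];
            buf.get(first);
            return new String(first);
        }
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for a large data file; any local (or FUSE-mounted) path works.
        Path path = Files.createTempFile("mmap-demo", ".bin");
        Files.write(path, "hello hdfs".getBytes());
        System.out.println(readFirst(path, 5)); // prints "hello"
        Files.deleteIfExists(path);
    }
}
```

Note this only works against something the kernel can mmap; HDFS's own client API streams blocks over the network and does not hand back a mappable local file on an arbitrary node, which is what the caution above is about.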

On Wed, Apr 13, 2011 at 7:35 AM, Benson Margulies <bimargul...@gmail.com> wrote:
> Point taken.
>
> On Wed, Apr 13, 2011 at 10:33 AM, M. C. Srivas <mcsri...@gmail.com> wrote:
>> Sorry, don't mean to say you don't know mmap or didn't do cool things in the
>> past.
>> But you can see why anyone would have interpreted the original post,
>> given its title and the following wording, to mean "can I mmap
>> files that are in hdfs"
>> On Mon, Apr 11, 2011 at 3:57 PM, Benson Margulies <bimargul...@gmail.com>
>> wrote:
>>>
>>> We have some very large files that we access via memory mapping in
>>> Java. Someone's asked us about how to make this conveniently
>>> deployable in Hadoop. If we tell them to put the files into hdfs, can
>>> we obtain a File for the underlying file on any given node?
>>
>>
>



-- 
Lance Norskog
goks...@gmail.com