Yes, but only one such block; that is what I meant by chunk. That is fine if you want that chunk, but if you want to mmap the entire file, it isn't very useful.
On Mon, Apr 11, 2011 at 6:48 PM, Jason Rutherglen <[email protected]> wrote:
> What do you mean by local chunk? I think it's providing access to the
> underlying file block?
>
> On Mon, Apr 11, 2011 at 6:30 PM, Ted Dunning <[email protected]> wrote:
> > Also, it only provides access to a local chunk of a file, which isn't very
> > useful.
> >
> > On Mon, Apr 11, 2011 at 5:32 PM, Edward Capriolo <[email protected]> wrote:
> >>
> >> On Mon, Apr 11, 2011 at 7:05 PM, Jason Rutherglen
> >> <[email protected]> wrote:
> >> > Yes, you can; however, it will require customization of HDFS. Take a
> >> > look at HDFS-347, specifically the HDFS-347-branch-20-append.txt patch.
> >> > I have been altering it for use with HBASE-3529. Note that the patch
> >> > noted is for the -append branch, which is mainly for HBase.
> >> >
> >> > On Mon, Apr 11, 2011 at 3:57 PM, Benson Margulies
> >> > <[email protected]> wrote:
> >> >> We have some very large files that we access via memory mapping in
> >> >> Java. Someone's asked us about how to make this conveniently
> >> >> deployable in Hadoop. If we tell them to put the files into HDFS, can
> >> >> we obtain a File for the underlying file on any given node?
> >> >>
> >> >
> >>
> >> This feature is not yet part of Hadoop, so doing this is not
> >> "convenient".
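For what it's worth, once you do have a local path to a block file (which is the part HDFS-347 would make possible), memory-mapping it in plain Java is straightforward. A minimal sketch, using a temp file as a stand-in for the local block file; `MmapSketch` and `mmapReadAll` are just illustrative names, not anything from Hadoop:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class MmapSketch {

    // Memory-map a local file read-only and return its contents as a string.
    static String mmapReadAll(Path path) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(path.toFile(), "r");
             FileChannel ch = raf.getChannel()) {
            MappedByteBuffer buf =
                ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
            byte[] bytes = new byte[(int) ch.size()];
            buf.get(bytes);
            return new String(bytes, StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        // A temp file stands in for the DataNode-local block file; on a real
        // cluster you would first need to resolve a local path to the block
        // (which is exactly what HDFS-347 is about).
        Path p = Files.createTempFile("block", ".dat");
        Files.write(p, "hello hdfs".getBytes(StandardCharsets.UTF_8));
        System.out.println(mmapReadAll(p));
        Files.deleteIfExists(p);
    }
}
```

Note this only ever maps one local block, which is Ted's point above: without extra machinery to locate and stitch together every block of the file, you cannot mmap the whole HDFS file this way.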
