Hello Allen,

Sorry to bug you about the same problem again. When you say "we need
to be explicit having multiple file-systems" for MapReduce jobs, are you
hinting at code changes to be made to Hadoop? Please provide more details
on this if possible.
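
In case it helps to pin down my question: is it something like the sketch
below, where fully qualified URIs select the file system per path? The mount
point /mnt/lustre and the namenode address localhost:9123 are just
placeholders for our setup.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MultiFsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // A fully qualified URI picks the file system for that path,
        // independent of whatever fs.default.name is set to.
        Path onLustre = new Path("file:///mnt/lustre/input");              // placeholder mount
        Path onHdfs = new Path("hdfs://localhost:9123/user/vikas/input");  // placeholder namenode

        // FileSystem.get(uri, conf) returns the implementation that
        // matches the scheme, so both can be used side by side.
        FileSystem lustreFs = FileSystem.get(onLustre.toUri(), conf);
        FileSystem hdfs = FileSystem.get(onHdfs.toUri(), conf);

        System.out.println("lustre path exists: " + lustreFs.exists(onLustre));
        System.out.println("hdfs path exists: " + hdfs.exists(onHdfs));
    }
}

If that is the idea, I assume the same fully qualified Paths would also work
with FileInputFormat.addInputPath / FileOutputFormat.setOutputPath in the job
setup, rather than any change to Hadoop itself?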

Thanks,
Vikas

On Sat, Jun 12, 2010 at 9:05 AM, Vikas Ashok Patil <vikas...@buffalo.edu> wrote:

> Hello Allen,
>
> Thanks for the reply.
>
> You are right about trying to run two distributed file systems. The reason
> is that there are certain restrictions (in our cluster environment) on
> including the local file system in Lustre. Could you tell me how I would
> make MapReduce access more than one file system? At least the configs
> don't seem to allow it.
>
> Thanks,
> Vikas A Patil
>
>
> On Sat, Jun 12, 2010 at 12:32 AM, Allen Wittenauer
> <awittena...@linkedin.com> wrote:
>
>> On Jun 10, 2010, at 8:27 PM, Vikas Ashok Patil wrote:
>>
>> > Thanks for the replies.
>> >
>> > If I have fs.default.name = file://my_lustre_mount_point , then only
>> the
>> > lustre filesystem will be used. I would like to have something like
>> >
>> > fs.default.name=file://my_lustre_mount_point , hdfs://localhost:9123
>> >
>> > so that both local filesystem and lustre are in use.
>> >
>> > Kindly correct me if I am missing something here.
>>
>> I guess we're all confused as to your use case.  Why do you want to run
>> two distributed file systems on the same nodes?  Why can't you use Lustre
>> for all your needs?
>>
>> As to fs.default.name, you can only have one.  [That's why it is a
>> default. *smile*]  If you want to access more than one file system from
>> within MapReduce, you'll need to specify it explicitly.
>
>
>
