What I mean to say is: does Hadoop internally assume that the
installation on every node needs to be in the same location?
I had Hadoop installed in different locations on 2 different nodes,
and I configured the Hadoop config files so they would be part of the
same cluster. But when I started Hadoop on the master, I saw it was
also searching for the Hadoop start scripts on the slaves in the same
location as on the master.
Is there any workaround for this kind of situation, or do I have to
reinstall Hadoop in the same location as on the master?
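For context on why this happens: the stock start scripts (start-dfs.sh and
friends) ssh into each slave and run the daemon script relative to the
master's own install path, which is why they look for the same location
everywhere. If reinstalling is not an option, one common workaround is a
symlink on each slave so that the master's path resolves to the slave's
real install. A minimal sketch follows; both paths are made-up examples,
created under /tmp so the sketch is self-contained and safe to run:

```shell
# Hypothetical paths, demoed under /tmp rather than the real filesystem.
rm -rf /tmp/demo                             # start from a clean slate
MASTER_PATH=/tmp/demo/usr/local/hadoop       # path the start scripts expect
SLAVE_PATH=/tmp/demo/opt/hadoop-install      # where hadoop actually lives

mkdir -p "$SLAVE_PATH/bin"
touch "$SLAVE_PATH/bin/hadoop-daemon.sh"     # stand-in for the real script

# Link the expected path to the real install:
mkdir -p "$(dirname "$MASTER_PATH")"
ln -s "$SLAVE_PATH" "$MASTER_PATH"

ls "$MASTER_PATH/bin"                        # the daemon script is now found
```

With a link like this on every slave, the master-side scripts find the
scripts at the path they expect without moving the actual installation,
though (as noted below) keeping the installs uniform is the simpler option.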

Thanks,
Praveenesh

On Fri, Dec 23, 2011 at 6:26 PM, Michael Segel
<michael_se...@hotmail.com> wrote:
> Sure,
> You could do that, but in doing so, you will make your life a living hell.
> Literally.
>
> Think about it... You will have to manually manage each node's config files...
>
> So if something goes wrong you will have a hard time diagnosing the issue.
>
> Why make life harder?
>
> Why not just do the simple thing and make all of your DNs the same?
>
> Sent from my iPhone
>
> On Dec 23, 2011, at 6:51 AM, "praveenesh kumar" <praveen...@gmail.com> wrote:
>
>> When installing Hadoop on slave machines, do we have to install Hadoop
>> in the same location on each machine?
>> Can we have the Hadoop installation in different locations on different
>> machines in the same cluster?
>> If yes, what do we have to take care of in that case?
>>
>> Thanks,
>> Praveenesh
