FYI, I'm going to try what I suggested to make sure I'm not sending you off
on a wild goose chase @davide :)

Got a CentOS box spun up; I'll let you know if it works from scratch. Then
you can copy the recipe.

I'll create a wiki page for it.


On Mon, Jul 21, 2014 at 6:44 PM, jay vyas <[email protected]>
wrote:

> I suggest using puppet as well, way easier than doing it manually.
>
> Basically, I think you could:
>
> - clone down the Bigtop GitHub repo and check out branch-0.7.0
> - put those Puppet recipes on your bare-metal nodes, and update the
> config CSV file to point to the IP of the master
> - run puppet apply on each node
>
> That's it. It should all, I think, just work automagically.
> Right?
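The steps above can be sketched as a shell session. This is a minimal sketch, not a definitive recipe: the CSV keys (hadoop_head_node, hadoop_storage_dirs, components) follow the Bigtop 0.7.0 Puppet layout under bigtop-deploy/puppet, and the IP address and storage paths are placeholders to adjust.

```shell
# Sketch of the Puppet-based deploy described above. The CSV keys are
# the ones Bigtop 0.7.0's recipes read; the IP and paths are placeholders.

# 1. Clone Bigtop and check out the 0.7.0 branch:
#      git clone https://github.com/apache/bigtop.git
#      cd bigtop && git checkout branch-0.7.0

# 2. Point the config CSV at the master's IP:
mkdir -p bigtop-deploy/puppet/config
cat > bigtop-deploy/puppet/config/site.csv <<'EOF'
hadoop_head_node,10.0.0.1
hadoop_storage_dirs,/data/1,/data/2
components,hadoop
EOF

# 3. On every node, apply the recipes (as root):
#      puppet apply -d --modulepath=bigtop-deploy/puppet/modules \
#          bigtop-deploy/puppet/manifests/site.pp

head -n1 bigtop-deploy/puppet/config/site.csv   # -> hadoop_head_node,10.0.0.1
```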
>
>
>
> On Mon, Jul 21, 2014 at 2:44 PM, David Fryer <[email protected]> wrote:
>
>> Yes, Bigtop 0.7.0 is installed.
>>
>> -David Fryer
>>
>>
>> On Mon, Jul 21, 2014 at 2:33 PM, Konstantin Boudnik <[email protected]>
>> wrote:
>>
>>> Sorry for being a nag - did you install Bigtop 0.7.0?
>>>
>>> Cc'ing dev@ list as well
>>>   Cos
>>>
>>> On Mon, Jul 21, 2014 at 01:15PM, David Fryer wrote:
>>> > I activated the Bigtop yum repository and installed the required
>>> > Hadoop packages via yum. All of the computers in the cluster are
>>> > running CentOS 6.5.
>>> >
>>> > -David Fryer
>>> >
>>> >
>>> > On Mon, Jul 21, 2014 at 1:01 PM, Konstantin Boudnik <[email protected]>
>>> wrote:
>>> >
>>> > > I see that your daemon is trying to log to /usr/lib/hadoop/logs,
>>> > > whereas Bigtop logs under /var/log, as required by Linux service
>>> > > good-behavior rules.
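A minimal sketch of driving the daemons through the packaged service scripts instead of hadoop-daemons.sh. The service names (hadoop-hdfs-namenode, hadoop-hdfs-datanode) assume Bigtop 0.7.0 packaging; the function only prints the command to run as root on each node type.

```shell
# Hedged sketch: Bigtop ships one init script per daemon, so HDFS is
# started service-by-service (and logs land under /var/log/hadoop-hdfs/)
# rather than via hadoop-daemons.sh and the slaves file. Service names
# assume Bigtop 0.7.0 packaging; this only prints the command to run.
start_cmd() {
  case "$1" in
    master) echo "service hadoop-hdfs-namenode start" ;;
    slave)  echo "service hadoop-hdfs-datanode start" ;;
  esac
}

start_cmd master   # -> service hadoop-hdfs-namenode start
start_cmd slave    # -> service hadoop-hdfs-datanode start
```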
>>> > >
>>> > > The namenode doesn't recognize DNs via the slaves file; rather,
>>> > > the DNs register with the NN via an RPC mechanism.
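To illustrate the registration path just described: each DN finds the NN through fs.defaultFS in core-site.xml and registers over RPC, so the slaves file plays no part in cluster membership. The hostname "master", port 8020, and the temp path below are placeholders for illustration only.

```shell
# Sketch of the registration path: DNs read fs.defaultFS from
# core-site.xml and register with that NN over RPC. Hostname "master",
# port 8020, and the /tmp path are placeholders.
mkdir -p /tmp/hadoop-conf-sketch
cat > /tmp/hadoop-conf-sketch/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
</configuration>
EOF

# Once the daemons are up, verify on the NN that the DNs checked in:
#   hdfs dfsadmin -report
grep -o 'hdfs://master:8020' /tmp/hadoop-conf-sketch/core-site.xml
```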
>>> > >
>>> > > How did you install Hadoop? Using Bigtop packages, or via a
>>> > > different mechanism? The fact that you are seeing an error message
>>> > > about cygwin not being found tells me that you are using derivative
>>> > > bits, not pure Bigtop. Is this the case?
>>> > >
>>> > > Regards
>>> > >   Cos
>>> > >
>>> > > On July 21, 2014 9:32:48 AM PDT, David Fryer <[email protected]>
>>> wrote:
>>> > > >When I tried starting Hadoop using the provided init scripts, the
>>> > > >master couldn't find any of the datanodes. It is my understanding
>>> > > >that the masters file is optional, but the slaves file is required.
>>> > > >The scripts that reference the slaves file are named in the plural
>>> > > >(instead of hadoop-daemon.sh, use hadoop-daemons.sh). I tried
>>> > > >modifying the init scripts to run hadoop-daemons.sh, and the script
>>> > > >attempted to spawn processes on the slaves referenced in the slaves
>>> > > >file, but that produced the error:
>>> > > >Starting Hadoop namenode:                                  [  OK  ]
>>> > > >slave2: starting namenode, logging to
>>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-loki.out
>>> > > >master: starting namenode, logging to
>>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-odin.out
>>> > > >slave3: starting namenode, logging to
>>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-tyr.out
>>> > > >slave1: starting namenode, logging to
>>> > > >/usr/lib/hadoop/logs/hadoop-hadoopuser-namenode-thor.out
>>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 34: /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or directory
>>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not found
>>> > > >slave2: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 34: /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or directory
>>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not found
>>> > > >slave3: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 34: /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or directory
>>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not found
>>> > > >master: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 34: /usr/lib/hadoop-hdfs/bin/../libexec/hdfs-config.sh: No such file or directory
>>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 150: cygpath: command not found
>>> > > >slave1: /usr/lib/hadoop-hdfs/bin/hdfs: line 191: exec: : not found
>>> > > >
>>> > > >-David Fryer
>>> > > >
>>> > > >
>>> > > >On Mon, Jul 21, 2014 at 12:18 PM, Konstantin Boudnik <[email protected]>
>>> > > >wrote:
>>> > > >
>>> > > >> Hi David.
>>> > > >>
>>> > > >> Slaves files are really optional, if I remember right. In Bigtop
>>> > > >> we usually deploy Hadoop with the provided Puppet recipes, which
>>> > > >> have been battle-hardened over the years :)
>>> > > >>
>>> > > >> Cos
>>> > > >>
>>> > > >> On Mon, Jul 21, 2014 at 10:53AM, David Fryer wrote:
>>> > > >> > Hi Bigtop!
>>> > > >> >
>>> > > >> > I'm working on trying to get Hadoop running in distributed
>>> > > >> > mode, but the init scripts don't seem to be referencing the
>>> > > >> > slaves file in /etc/hadoop/conf. Has anyone encountered this
>>> > > >> > before?
>>> > > >> >
>>> > > >> > Thanks,
>>> > > >> > David Fryer
>>> > > >>
>>> > >
>>> > >
>>>
>>
>>
>
>
> --
> jay vyas
>



-- 
jay vyas
