Zander,
I've looked at the datanode logs on my slaves, but they are all quite
small, even though we've run many jobs on them. Running two new jobs
didn't add anything to them either.
(As far as I understand from their contents, Hadoop seems to log mainly
operations related to DFS performance tests there.)
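If you want more detail in the datanode logs, one option is to raise the
log level in conf/log4j.properties (just a sketch of the relevant lines,
assuming the default layout of that file; the DataNode class name differs
between Hadoop versions, so adjust it to yours):

    # conf/log4j.properties
    # root logger; the daemon scripts normally override this to INFO,DRFA
    hadoop.root.logger=INFO,console
    # assumption: verbose output for the datanode class only
    log4j.logger.org.apache.hadoop.hdfs.server.datanode.DataNode=DEBUG

The daemons need a restart (bin/stop-all.sh, then bin/start-all.sh) before
the change takes effect.

Zander, for your empty slave log, you can at least check whether the
slave's datanode registered with the namenode:

    bin/hadoop dfsadmin -report

If the slave shows up there with non-zero capacity, the datanode is
running even if its log file stays quiet.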

Cheers,
Rasit

2009/2/20 zander1013 <zander1...@gmail.com>

>
> hi,
>
> i am setting up hadoop for the first time on a multi-node cluster. right
> now i have two nodes. the two-node cluster consists of two laptops
> connected via an ad-hoc wifi network. they do not have access to the
> internet. i formatted the datanodes on both machines prior to startup...
>
> output from the commands /usr/local/hadoop/bin/start-all.sh, jps (on both
> machines), and /usr/local/hadoop/bin/stop-all.sh all appears normal. however
> the file /usr/local/hadoop/logs/hadoop-hadoop-datanode-node1.log (the slave
> node) is empty.
>
> the same file for the master node shows the startup and shutdown events as
> normal and without error.
>
> is it okay that the log file on the slave is empty?
>
> zander
> --
> View this message in context:
> http://www.nabble.com/empty-log-file...-tp22113398p22113398.html
> Sent from the Hadoop core-user mailing list archive at Nabble.com.
>
>


-- 
M. Raşit ÖZDAŞ
