Hi All,

I had posted a question earlier about some not-so-intuitive error messages I was getting on one of the clusters when trying to run map/reduce jobs. After many hours of googling :) I found a post that solved my problem:

http://www.mail-archive.com/core-user@hadoop.apache.org/msg07202.html

One of our engineers had run far too many jobs, which created an enormous number of subdirectories under $HADOOP_HOME/logs/userlogs. Deleting those subdirectories under $HADOOP_HOME/logs/userlogs/ on the datanodes solved the problem. You can also have Hadoop clean them up automatically by lowering the retention time in hadoop-default.xml from the default of 24 hours to something smaller; the specific param is userlogs.retain.
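For anyone who wants to script the manual cleanup, here is a rough sketch of the kind of `find` command you could run on each datanode. It only simulates the layout in a temp directory so it's safe to try; the subdirectory names are made up, and against a real cluster you would point it at $HADOOP_HOME/logs/userlogs instead:

```shell
# Illustration only: simulate the userlogs layout, then prune
# subdirectories older than one day -- the same find command you would
# run against $HADOOP_HOME/logs/userlogs on each datanode.
USERLOGS="$(mktemp -d)/logs/userlogs"       # stand-in for $HADOOP_HOME/logs/userlogs
mkdir -p "$USERLOGS/task_old" "$USERLOGS/task_new"
touch -t 202001010000 "$USERLOGS/task_old"  # backdate one subdir past the cutoff
# -mindepth/-maxdepth 1 keeps find from deleting userlogs itself or
# descending into the per-task dirs; -mtime +0 matches dirs older than 24h.
find "$USERLOGS" -mindepth 1 -maxdepth 1 -type d -mtime +0 -exec rm -rf {} +
ls "$USERLOGS"
```

Obviously test it with `-print` in place of `-exec rm -rf {} +` first before pointing it at production log directories.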

Just wanted to share this with you all.

Thanks,
Usman


