, 2011 3:58:19 AM
Subject: RE: Any reason Hadoop logs cant be directed to a separate filesystem?
Yes, and it's called using cron and writing a simple ksh script to clear out any
files that are older than 15 days.
There may be another way, but that's really the easiest.
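A minimal sketch of such a cleanup script (the log path and script name below are assumptions, not from this thread; point it at your actual HADOOP_LOG_DIR):

```shell
#!/bin/ksh
# clean_hadoop_logs.ksh -- remove Hadoop log files older than 15 days.
# /var/log/hadoop is a hypothetical default; set HADOOP_LOG_DIR for your grid.
LOG_DIR="${HADOOP_LOG_DIR:-/var/log/hadoop}"

if [ -d "$LOG_DIR" ]; then
    # -mtime +15 matches files last modified more than 15 days ago
    find "$LOG_DIR" -type f -mtime +15 -print -delete
fi
```

Run it nightly from cron, e.g. `0 3 * * * /usr/local/bin/clean_hadoop_logs.ksh` (time and path are just examples).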
> Date: Thu, 23
> From: Jack Craig
> To: "common-user@hadoop.apache.org"
> Sent: Wed, 22 June, 2011 2:00:23 PM
> Subject: Re: Any reason Hadoop logs cant be directed to a separate filesystem?
>
> Thx to both respondents.
>
> Note I've not tried this redirection as I have only production grids available.
Hi,
Can I limit how long log files are kept?
I want to keep files for the last 15 days only.
Regards,
Jagaran
From: Jack Craig
To: "common-user@hadoop.apache.org"
Sent: Wed, 22 June, 2011 2:00:23 PM
Subject: Re: Any reason Hadoop logs cant be directed to a separate filesystem?
Thx to both respondents.
Note I've not tried this redirection as I have only production grids available.
Our grids are growing and, with them, log volume.
Until now that log location has been in the same filesystem as the grid data,
so running out of space due to log bloat is a growing problem.
Jack,
I believe the location can definitely be set to any desired path.
Could you tell us the issues you face when you change it?
P.S. The env var is used to set the config property hadoop.log.dir
internally. So as long as you use the regular scripts (bin/ or init.d/
ones) to start the daemons, it would work as expected.
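Redirecting is then just a matter of uncommenting that variable in conf/hadoop-env.sh and pointing it at the separate filesystem, for example (the /hadoop-logs mount point is a hypothetical, not from this thread):

```shell
# conf/hadoop-env.sh -- send daemon logs to a dedicated filesystem
# (/hadoop-logs is an assumed mount point; substitute your own)
export HADOOP_LOG_DIR=/hadoop-logs
```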
Looks like you missed the '#' at the beginning of the line.
Feel free to set HADOOP_LOG_DIR in that script or elsewhere
On 6/22/11 1:02 PM, "Jack Craig" wrote:
>Hi Folks,
>
>In the hadoop-env.sh, we find, ...
>
># Where log files are stored. $HADOOP_HOME/logs by default.
># export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
Hi Folks,
In the hadoop-env.sh, we find, ...
# Where log files are stored. $HADOOP_HOME/logs by default.
# export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
Is there any reason this location could not be a separate filesystem on the
name node?
Thx, jackc...
Jack C