I did this with an array of commands for the jobs in a PHP script, checking
the return code of each job to tell whether it failed or not.
Billy
Dan Milstein dmilst...@hubteam.com wrote in
message news:58d66a11-b59c-49f8-b72f-7507482c3...@hubteam.com...
If I've got a sequence of streaming jobs, each of
You can also pull these variables from the name node, datanode with
JMX. I am doing this to graph them with cacti. Both the JMX READ/WRITE
and READ user can access this variable.
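For reference, remote JMX access to the NameNode and DataNode JVMs is typically enabled with the standard JVM management flags in conf/hadoop-env.sh. A minimal sketch — the port numbers are arbitrary examples, and the no-authentication settings are for illustration only:

```shell
# In conf/hadoop-env.sh -- enable remote JMX on the NameNode and DataNode.
# Ports 8004/8005 are illustrative; production setups should turn on
# authentication and SSL instead of disabling them as shown here.
export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=8004 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote \
  -Dcom.sun.management.jmxremote.port=8005 \
  -Dcom.sun.management.jmxremote.authenticate=false \
  -Dcom.sun.management.jmxremote.ssl=false $HADOOP_DATANODE_OPTS"
```

A JMX client (jconsole, or a cacti poller script) can then connect to those ports and read the FSNamesystem metrics.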
On Tue, Apr 28, 2009 at 8:29 AM, Stas Oskin stas.os...@gmail.com wrote:
Hi.
Any idea if the getDiskStatus()
Hi.
Actually, I'm trying to use the getDiskStatus() function, but it doesn't
seem to work so well in 0.18.3.
Can someone advise on a reliable way to get the overall free and used space
in HDFS?
Same function that reports the space to the NameNode web panel?
Thanks.
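For what it's worth, the dfsadmin tool reports the same capacity numbers the NameNode web panel shows. A sketch of pulling them out with awk — the sample text below only approximates the report format, so the field labels are assumptions to check against your version's actual output:

```shell
# Parse "hadoop dfsadmin -report" style output for used/free bytes.
# The sample report is an approximation of the real format.
report='Configured Capacity: 1000000000 (953.67 MB)
DFS Used: 250000000 (238.42 MB)
DFS Remaining: 750000000 (715.26 MB)'
# On a real cluster, replace the sample with:
#   report="$(hadoop dfsadmin -report)"
used=$(printf '%s\n' "$report" | awk -F'[: ]+' '/^DFS Used:/ {print $3}')
free=$(printf '%s\n' "$report" | awk -F'[: ]+' '/^DFS Remaining:/ {print $3}')
echo "used=$used free=$free"
```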
2009/5/2 Edward Capriolo
Hi Todd,
Not sure if this is related, but our Hadoop cluster is getting more and
more unstable in general. The logs are full of this error message (but I'm
having trouble tracking down the root problem):
2009-05-02 11:30:39,294 INFO org.apache.hadoop.mapred.TaskTracker:
Billy Pearson wrote:
I did this with an array of commands for the jobs in a PHP script, checking
the return code of each job to tell whether it failed or not.
Billy
I have this same issue. How do you check whether a job failed or not? You
mentioned checking the return code? How are you doing that?
If you are using sh or bash, the variable $? holds the exit status of the
last command executed.
hadoop jar streaming.jar ...
if [ $? -ne 0 ]; then
echo "My job failed" 1>&2
exit 1
fi
Caution: $? is the exit status of the very last command to execute. It is
easy to run another command before checking it and lose the value you
wanted.
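Putting that together, a sketch of running a sequence of jobs and stopping at the first failure. The job commands here are placeholders — substitute your real `hadoop jar streaming.jar ...` invocations:

```shell
#!/bin/sh
# Run each job in order; abort the chain on the first non-zero exit status.
# $? is captured immediately so a later command cannot clobber it.
run_job() {
    "$@"
    status=$?          # save the exit status right away
    if [ $status -ne 0 ]; then
        echo "job failed: $* (exit $status)" 1>&2
        exit 1
    fi
}

run_job true   # placeholder for: hadoop jar streaming.jar ... (job 1)
run_job true   # placeholder for: hadoop jar streaming.jar ... (job 2)
result="all jobs succeeded"
echo "$result"
```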
In PHP I run the job commands with exec(), which takes a parameter that
receives the exit status code.
Billy
Mayuran Yogarajah
mayuran.yogara...@casalemedia.com wrote in message
news:49fc975a.3030...@casalemedia.com...
Billy Pearson wrote:
I did this with an array of commands for the