Thanks Jason.
My object is relatively small. But how do I pass it via the JobConf object?
Can you elaborate a bit...
Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz
On Sat, May 2, 2009 at 11:53 PM, jason hadoop wrote:
If it is relatively small you can pass it via the JobConf object, storing a
serialized version of your dataset.
If it is larger you can pass a serialized version via the distributed cache.
Your map task will need to deserialize the object in the configure method.
None of the above methods give you
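A minimal sketch of the serialize-into-the-conf idea Jason describes. Only conf.set()/conf.get() and the mapper's configure() hook are real Hadoop API; the helper class and key name here are made up for illustration, and the serialization itself is plain Java so the round trip can be shown end to end:

```java
import java.io.*;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Base64;

// Sketch: serialize a small object to a Base64 string so it can be
// stored with conf.set("my.dataset", encoded) at job-setup time and
// decoded again inside the mapper's configure() method.
public class ConfSerde {
    // Encode any Serializable object as a Base64 string.
    public static String encode(Serializable obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(obj);
        }
        return Base64.getEncoder().encodeToString(bytes.toByteArray());
    }

    // Decode the string back into the object (call this from configure()).
    public static Object decode(String encoded) throws IOException, ClassNotFoundException {
        byte[] raw = Base64.getDecoder().decode(encoded);
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        ArrayList<String> dataset = new ArrayList<>(Arrays.asList("alpha", "beta"));
        String encoded = encode(dataset);        // job setup: conf.set("my.dataset", encoded)
        Object roundTrip = decode(encoded);      // in configure(): conf.get("my.dataset")
        System.out.println(roundTrip);
    }
}
```

For the larger, distributed-cache case the decode step is the same; only the bytes come from a cached file instead of the conf string.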
How can I create a global variable for each node running my map task? For
example, a common ArrayList that my map function can access for every k,v
pair it works on. It doesn't really need to create the ArrayList every time.
If I create it in the main function of the job, the map function gets a null
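The usual answer is the configure() hook mentioned above: build the list once per task, not once per record. As a plain-Java illustration of that "once per task JVM" pattern (the class and field names below are invented for the sketch, not Hadoop API):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the mapper class keeps one shared list per task JVM.
// In a real mapper you would populate it from configure() and read
// it from map(); here it is plain Java to show the pattern.
public class SharedListHolder {
    private static List<String> shared;   // built once per JVM
    private static int buildCount = 0;    // shows it is built only once

    public static synchronized List<String> getSharedList() {
        if (shared == null) {
            buildCount++;
            shared = new ArrayList<>();
            shared.add("lookup-entry");   // expensive setup happens here, once
        }
        return shared;
    }

    public static synchronized int getBuildCount() {
        return buildCount;
    }
}
```

Every call from map() then returns the same instance, so per-record work touches no construction cost.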
In PHP I run the job commands via exec(), which returns the exit status
code through one of its arguments.
Billy
"Mayuran Yogarajah"
wrote in message
news:49fc975a.3030...@casalemedia.com...
Billy Pearson wrote:
I did this with an array of commands for the jobs in a PHP script,
checking
If you are using sh or bash, the variable $? holds the exit status of
the last command executed.
hadoop jar streaming.jar ...
if [ $? -ne 0 ]; then
echo "My job failed" >&2
exit 1
fi
Caution: $? is the exit status of the very last command to execute. It is
easy to run another command bef
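The same check can be done from Java by launching the job command as a subprocess and capturing its status immediately; ProcessBuilder and waitFor() are standard Java, and the command line below is only a placeholder for the real hadoop invocation:

```java
import java.io.IOException;

// Sketch: run a job command as a subprocess and capture its exit status
// right away, before anything else can overwrite it (the same caution
// as with $? in the shell).
public class JobExitStatus {
    public static int runAndGetStatus(String... command)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(command)
                .inheritIO()    // let the job's output flow to our stdout/stderr
                .start();
        return p.waitFor();     // blocks until the command finishes
    }

    public static void main(String[] args) throws Exception {
        // In real use: runAndGetStatus("hadoop", "jar", "streaming.jar", ...)
        int status = runAndGetStatus("sh", "-c", "exit 3");
        if (status != 0) {
            System.err.println("My job failed with status " + status);
        }
    }
}
```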
Billy Pearson wrote:
I did this with an array of commands for the jobs in a PHP script, checking
the return of the job to tell whether it failed or not.
Billy
I have this same issue. How do you check whether a job failed or not? You
mentioned checking the return code? How are you doing that?
tha
Hi Todd,
Not sure if this is related, but our Hadoop cluster in general is
getting more and more unstable. The logs are full of this error
message (but we're having trouble tracking down the root problem):
2009-05-02 11:30:39,294 INFO org.apache.hadoop.mapred.TaskTracker:
org.apache.hadoop.util
Hi.
Actually, I'm trying to use the getDiskStatus() function, but it doesn't
seem to work so well in 0.18.3.
Can someone advise on a reliable way to get the overall free and used
HDFS space?
The same function that reports the space to the NameNode web panel?
Thanks.
2009/5/2 Edward Capriolo
You can also pull these variables from the NameNode and DataNode with
JMX. I am doing this to graph them with Cacti. Both the JMX READ/WRITE
and READ users can access this variable.
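Hadoop daemons expose their metrics as JMX MBeans, but the exact ObjectNames vary by version, so here is only the generic JMX read pattern, demonstrated against the local JVM's platform MBean server (the java.lang:type=Memory bean stands in for a NameNode bean; a remote daemon would be reached via JMXConnectorFactory instead):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;

// Sketch of the JMX read pattern: look up an MBean by ObjectName and
// read one of its attributes. Against a running NameNode you would
// connect remotely and substitute the Hadoop-specific ObjectName.
public class JmxRead {
    public static long usedHeapBytes() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("java.lang:type=Memory");
        CompositeData heap = (CompositeData) server.getAttribute(name, "HeapMemoryUsage");
        return (Long) heap.get("used");
    }

    public static void main(String[] args) throws Exception {
        System.out.println("used heap bytes: " + usedHeapBytes());
    }
}
```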
On Tue, Apr 28, 2009 at 8:29 AM, Stas Oskin wrote:
> Hi.
>
> Any idea if the getDiskStatus() function requires superus
I did this with an array of commands for the jobs in a PHP script, checking
the return of the job to tell whether it failed or not.
Billy
"Dan Milstein" wrote in
message news:58d66a11-b59c-49f8-b72f-7507482c3...@hubteam.com...
If I've got a sequence of streaming jobs, each of which depends on the