If you expected your job to take a while and you just want to make sure the
job doesn't fail due to timeouts, you can set the Hadoop parameter
mapreduce.task.timeout to a higher value, or to 0 for no timeout at all.
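For example, a minimal sketch of setting this in the job setup code (note
that on Hadoop 1.x the key is spelled mapred.task.timeout instead):

    import org.apache.giraph.conf.GiraphConfiguration;

    // Raise the task timeout (milliseconds); 0 disables it entirely.
    GiraphConfiguration conf = new GiraphConfiguration();
    conf.setInt("mapreduce.task.timeout", 0);

If your job is launched through ToolRunner, passing
-Dmapreduce.task.timeout=0 on the command line should work as well.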
If you didn't expect your job to take a while on a single worker, it's hard
to say what's going wrong without more details.
java.lang.InterruptedException; must be caught or declared to be thrown
Can anyone confirm whether they can get the cdh4.1.2 profile to compile
as-is? The actual version I eventually want to compile for is cdh4.2.0;
does anyone have any tips on how to get Giraph working on that?
Thanks,
Manuel Lagang
Change

    <munge.symbols>HADOOP_1_SECRET_MANAGER</munge.symbols>

to these properties:

    <hadoop.version>2.0.0-cdh4.4.0</hadoop.version>
    <munge.symbols>HADOOP_1_SECRET_MANAGER</munge.symbols>
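Then rebuild with the cdh profile so the new version is picked up, e.g.
(double-check the pom for the exact profile name in your checkout):

    mvn -Phadoop_cdh4.1.2 -DskipTests clean package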
Let me know if this works!!!
Thanks,
Ameya
On Mon, Dec 2, 2013 at 12:58 PM, Manuel Lagang wrote:
I get the same error when I compile Giraph against the default Hadoop
version (0.20.203.0) while my project that uses Giraph uses a more recent
Hadoop version. Did you set the Hadoop version via a Maven profile when
compiling Giraph (e.g. mvn -Phadoop_1.0 compile for Hadoop 1.0)?
Presumably, Giraph needs to be compiled against the same Hadoop version
that the downstream project runs on.
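For example (version numbers here are only illustrative), building Giraph
with:

    mvn -Phadoop_1.0 -Dhadoop.version=1.0.4 clean install

and declaring the matching version in the downstream project's pom:

    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-core</artifactId>
      <version>1.0.4</version>
    </dependency>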
I also had the same issues when I used the out-of-core features, even for
trivial datasets, on the 1.0.0-RC3 branch. The job would seem to finish all
supersteps, but it would hang during the final output of data to HDFS. I
found that when I used the latest code in trunk instead, I hit null pointer
exceptions in
org.apache.giraph.comm.SendCache.removeWorkerData(SendCache.java:199).
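For context, this is roughly how I was turning on out-of-core (a sketch;
the option names are the ones I found in the 1.0-era docs, so verify them
against your version):

    import org.apache.giraph.conf.GiraphConfiguration;

    // Enable out-of-core messages and graph partitions.
    GiraphConfiguration conf = new GiraphConfiguration();
    conf.setBoolean("giraph.useOutOfCoreMessages", true);
    conf.setBoolean("giraph.useOutOfCoreGraph", true);
    // Cap how many partitions are kept in memory at once.
    conf.setInt("giraph.maxPartitionsInMemory", 10);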
So I must be confused about what these variables mean and what their legal
values are. Can anyone enlighten me on how (if possible) to get the
behavior I want?
Thanks,
Manuel Lagang