Hi Hassan,
Thanks for the elaborate response. I am not running Giraph jobs in parallel; I am trying to run one job with 900M edges.
I have also removed the _bsp folder before every run.
I did check out the latest code from the Phabricator commit. Out-of-core
works perfectly for 300M records
Ramesh,
The out-of-core mechanism keeps spilled data in files in the local job
directory, which is usually derived from Hadoop's "mapred.job.id". This
should be different from one run to another, so there shouldn't be any
conflict between different runs using the out-of-core mechanism. However, you
may h
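The point Hassan makes above is that keying the spill location on the job id guarantees isolation between runs. A minimal sketch of that idea (the helper name, directory layout, and paths here are illustrative, not Giraph's actual implementation):

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class SpillDirExample {
    // Hypothetical helper: derive a per-job spill directory from the
    // Hadoop job id, so successive or concurrent runs never collide
    // even if they share the same local root.
    static Path spillDir(String localRoot, String jobId, int ioThread) {
        return Paths.get(localRoot, "_spill_" + jobId, "io-thread-" + ioThread);
    }

    public static void main(String[] args) {
        Path a = spillDir("/tmp/giraph", "job_201605140001_0001", 0);
        Path b = spillDir("/tmp/giraph", "job_201605140001_0002", 0);
        System.out.println(a);
        // Different job ids yield different directories, so no conflict.
        System.out.println(a.equals(b));
    }
}
```

Because the job id changes on every submission, stale spill files from a previous run are never picked up by a new one.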
Thanks Hassan. I have removed checkpointing, but am still getting a different
error
*Script :*
hadoop jar \
  /usr/local/giraph.back.1.2.0/giraph-examples/target/giraph-examples-1.2.0-SNAPSHOT-for-hadoop-2.7.0-jar-with-dependencies.jar \
  org.apache.giraph.GiraphRunner \
  -Dmapreduce.task.timeout=1200 \
  -Dm
Hi Ramesh!
Thanks for bringing this up, and thanks for trying out the new out-of-core
mechanism. The new out-of-core mechanism has not been integrated with
checkpointing yet. This is part of an ongoing project, and we should have
the integration within a few weeks. In the meantime, you can try
out
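For reference, disabling checkpointing while keeping out-of-core enabled can be done from the command line. A sketch under the assumption that the Giraph 1.2-era property names `giraph.useOutOfCoreGraph` and `giraph.checkpointFrequency` apply to this build (a frequency of 0 disables checkpointing); the jar path and other arguments are placeholders:

```shell
# Illustrative invocation only -- verify option names against your build.
hadoop jar giraph-examples-jar-with-dependencies.jar \
  org.apache.giraph.GiraphRunner \
  -Dgiraph.useOutOfCoreGraph=true \
  -Dgiraph.checkpointFrequency=0 \
  ...
```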
PFA the correct logs for the concurrent exception
2016-05-14 19:10:55,733 ERROR [ooc-io-0] org.apache.giraph.utils.LogStacktraceCallable: Execution of callable failed
java.lang.RuntimeException: java.io.EOFException
        at org.apache.giraph.ooc.OutOfCoreIOCallable.call(OutOfCoreIOCallable.jav
Hi Team,
I have the latest build of Giraph running on a 5-node cluster. When I try
to use the out-of-core graph option for a huge data set like 600 million edges, I am
running into the following exception. Please find below the script being executed and
the exception logs. I have tried all possible ways a