Rob
From: Sebastian Stipkovic sebastian.stipko...@gmail.com
Reply-To: user@giraph.apache.org
Date: Thursday, 5 December 2013 20:39
To: user@giraph.apache.org
Subject: out of core option
Hello,

I have set up Giraph 1.1.0 with hadoop-0.20.203.0rc1 on a single-node
cluster. It computes a tiny graph successfully. But if the input graph
is huge (5 GB), I get an OutOfMemory (Garbage Collector) exception,
although I had turned on the out-of-core option. The job
Each worker is allocated *mapred.child.java.opts* memory, which in your
case is 4000M. Check whether your server has enough memory for 2
mappers. Also, the out-of-core option is available in two forms:
1. Out-of-core graph
2. Out-of-core messages
Currently you are setting only the out of
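For reference, both out-of-core settings can be passed as custom arguments (`-ca`) to GiraphRunner. The sketch below assumes Giraph 1.1; the jar name, computation class, input/output formats, and HDFS paths are placeholders, and the threshold values are illustrative, not recommendations:

```shell
# Sketch: enable both out-of-core graph and out-of-core messages.
# Jar, computation class, formats, and paths are placeholders.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/input/graph.txt \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/output/result \
  -w 2 \
  -ca giraph.useOutOfCoreGraph=true \
  -ca giraph.maxPartitionsInMemory=4 \
  -ca giraph.useOutOfCoreMessages=true \
  -ca giraph.maxMessagesInMemory=100000
```

With only `giraph.useOutOfCoreGraph` set, partitions spill to disk but messages still accumulate on the heap, which is why a large input can still trigger GC-related OutOfMemory errors.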
Hi Ameya,

thanks for the answer. My allocated memory was too high: my server has
4000M altogether, so I have turned the memory down to 2000M for each
mapper. Now I have set both out-of-core options and get the following
exception:
2013-12-05 23:10:18,568 INFO org.apache.hadoop.mapred.JobTracker:
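For reference, the per-mapper heap reduction described above is set through the `mapred.child.java.opts` property, e.g. in mapred-site.xml (a sketch using the 2000M value from this thread; on Hadoop 0.20.x this property covers both map and reduce child JVMs):

```xml
<!-- mapred-site.xml: cap each child JVM at 2000M -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2000m</value>
</property>
```

With two mappers on a 4000M machine, 2 x 2000M already consumes the whole budget before the TaskTracker, DataNode, and OS are accounted for, so an even lower heap or a single mapper slot may be needed.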