You should try using appropriate memory settings (e.g.
-Dmapred.child.java.opts="-Xms30g -Xmx30g -Xss128k" for a 30 GB heap).
The right values depend on how much memory your nodes have.
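As a minimal sketch, those JVM options can be passed on the command line when launching a Giraph job through the Hadoop 1.x runner. The jar name, computation class, and input/output paths below are placeholders, not anything from this thread:

```shell
# Sketch: set the child-task JVM heap for one job via generic options.
# -Xms/-Xmx fix the heap at 30 GB; -Xss128k shrinks per-thread stacks.
# The heap must still fit in physical RAM alongside the TaskTracker
# and any other daemons on the node.
hadoop jar my-giraph-job.jar org.apache.giraph.GiraphRunner \
  -Dmapred.child.java.opts="-Xms30g -Xmx30g -Xss128k" \
  com.example.MyComputation \
  -vip /input/graph -op /output/result -w 20
```

Setting the option per job with -D overrides the cluster-wide value in mapred-site.xml, which is usually preferable to raising the default for every job.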
Avery
On 7/9/12 5:57 AM, Amani Alonazi wrote:
Actually, I had the same problem of running out of memory with Giraph when
trying to implement the strongly connected components algorithm. My
input graph has 1 million nodes and 7 million edges.
I'm using a cluster of 21 computers.
On Mon, Jul 9, 2012 at 3:44 PM, Benjamin Heitmann <
benjamin
Hello Stephen,
sorry for the very late reply.
On 28 Jun 2012, at 02:50, Fleischman, Stephen (ISS SCI - Plano TX) wrote:
> Hello Avery and all:
> I have a cluster of 10 two-processor/48 GB RAM servers, upon which we are
> conducting Hadoop performance characterization tests. I plan to use t