Hi, Claudio,
I set the following when running the program:
export HADOOP_DATANODE_OPTS=-Xmx10g
and
export HADOOP_HEAPSIZE=3
in hadoop-env.sh and restarted hadoop.
Best Regards,
Suijian
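For reference, this is roughly what those settings look like in conf/hadoop-env.sh (Hadoop 1.x). Note that HADOOP_HEAPSIZE is interpreted in MB, so a literal value of 3 would mean a 3MB daemon heap; the sizes below are illustrative assumptions, not the values actually used on this cluster.

```shell
# conf/hadoop-env.sh -- illustrative daemon heap settings (Hadoop 1.x).
# HADOOP_HEAPSIZE is in MB and applies to every Hadoop daemon unless a
# per-daemon *_OPTS variable overrides it.
export HADOOP_HEAPSIZE=2000                 # 2000 MB default daemon heap (assumed size)
export HADOOP_DATANODE_OPTS="-Xmx10g"       # DataNode gets its own 10GB cap
```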
2014-03-06 17:29 GMT-06:00 Claudio Martella claudio.marte...@gmail.com:
did you actually increase the heap?
The current setting is:
<name>mapred.child.java.opts</name>
<value>-Xmx6144m -XX:+UseParallelGC -mx1024m -XX:MaxHeapFreeRatio=10
-XX:MinHeapFreeRatio=10</value>
Is 6144MB enough (for each task tracker)? I.e., I have 39 nodes to process
the 8*2GB input files.
Best Regards,
Suijian
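A rough back-of-the-envelope check on whether -Xmx6144m per task fits: if the TaskTracker runs one map slot per core (an assumption -- 8 slots on these 8-core nodes, not a value taken from the cluster's actual mapred-site.xml), the worst-case task heap per node far exceeds the 16GB of physical RAM:

```shell
# Worst-case task-JVM heap per node, assuming 8 map slots (one per core).
slots=8
xmx_mb=6144
node_ram_mb=$((16 * 1024))

demand_mb=$((slots * xmx_mb))
echo "per-node heap demand: ${demand_mb} MB"      # 49152 MB = 48 GB
if [ "$demand_mb" -gt "$node_ram_mb" ]; then
  echo "over physical RAM by $((demand_mb - node_ram_mb)) MB"
fi
```

If all slots fill up at once, the node starts swapping, which can make a job look hung rather than fail outright.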
2014-03-07
7, each node has a datanode and a tasktracker running on it. I attach the
full file here:
2014.03.07|10:13:17 ~/HadoopSetupTest/hadoop-1.2.1/conf$ cat mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
I tried to set mapred.tasktracker.map.tasks.maximum to 1, but then Giraph
gets stuck even for the tiny test input graph. Setting it to 2 works, but
processing the big graph still gets stuck for the 5*2GB input files
(with -Xmx16g and mapred.job.tracker.handler.count=8 now):
14/03/07 16:42:17
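The same arithmetic applies here: mapred.tasktracker.map.tasks.maximum=2 combined with -Xmx16g asks for up to 32GB of task heap on a 16GB node, before the DataNode and TaskTracker daemons take their share. A hedged sanity check (the 2GB daemon overhead below is an assumed figure, not a measured one):

```shell
# How many task slots with a given -Xmx actually fit in a node's physical
# RAM?  Reserve some memory for the DataNode/TaskTracker daemons first.
node_ram_gb=16
daemon_overhead_gb=2                         # assumed, not measured
usable_gb=$((node_ram_gb - daemon_overhead_gb))

for xmx_gb in 16 6; do
  slots=$((usable_gb / xmx_gb))
  echo "-Xmx${xmx_gb}g -> at most ${slots} slot(s) fit"   # 0 for 16g, 2 for 6g
done
```

With -Xmx16g, not even a single task JVM fits alongside the daemons, so the node would swap heavily -- which could look exactly like a hung job.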
Hi, Experts,
I'm trying to run PageRank on a graph in Giraph, but the program always
gets stuck.
There are 8 input files, each ~2GB in size, all copied onto HDFS. I use 39
nodes, and each node has 16GB of memory and 8 cores. It keeps printing the
same info (as follows) on the
Hi,
I tried to process only 2 of the input files, i.e., 2GB + 2GB input; the
program finished successfully in 6 minutes. But as I have 39 nodes,
shouldn't they be enough to load and process the 8*2GB=16GB graph? Can
somebody give some hints? (Will all the nodes participate in graph
did you actually increase the heap?
On Thu, Mar 6, 2014 at 11:43 PM, Suijian Zhou suijian.z...@gmail.com wrote:
Hi,
I tried to process only 2 of the input files, i.e., 2GB + 2GB input; the
program finished successfully in 6 minutes. But as I have 39 nodes,
shouldn't they be enough to load and