Hi All
In my case, the problem was solved by adding
"-Dmapreduce.map.memory.mb=14848" on the command line. Just make sure
there is no other soft/hard memory limit for your account.
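For reference, a full invocation might look like the following (a sketch
only; the jar, computation class, I/O formats, and paths are
placeholders, not the actual job from this thread):

hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  -Dmapreduce.map.memory.mb=14848 \
  -Dmapreduce.map.java.opts=-Xmx13g \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/hai/input \
  -vof org.apache.giraph.io.formats.IdWithValueTextOutputFormat \
  -op /user/hai/output \
  -w 10

(Keeping mapreduce.map.java.opts a bit below mapreduce.map.memory.mb
leaves the container some non-heap headroom.)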
Thanks for the help.
Best,
Hai
On Sat, Nov 5, 2016 at 2:46 AM, Panagiotis Liakos wrote:
> Hi all,
>
Hi Panagioti,
This really helped me. Thanks for your help!
Regards,
Xenia
2016-11-06 10:15 GMT+02:00 Panagiotis Liakos :
> Hi Xenia,
>
> The value in mapred-site.xml would be used in case you submitted the
> giraph job through hadoop, e.g., with hadoop jar.
>
> With your code I believe all you need to set is the run configuration
> for this specific Java class.
Hi Xenia,
The value in mapred-site.xml would be used in case you submitted the
giraph job through hadoop, e.g., with hadoop jar.
With your code I believe all you need to set is the run configuration
for this specific Java class.
Through eclipse, right-click your java file, go to Run as -> Run
Configurations, and raise the heap under VM arguments (e.g. -Xmx4096m).
Hi all,
I execute my project from inside eclipse. Maybe this is the problem and
the value in mapred-site.xml isn't recognised?
Basically I have a file with the following code in order to run my project
through eclipse.
public int run(String[] arg0) throws Exception {
    ..
    giraphConf.setWorkerConfiguration(...)
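A fuller sketch of such a run method, assuming Giraph's
GiraphConfiguration/GiraphJob API (the computation class and job name
below are placeholders, not Xenia's actual code):

import org.apache.giraph.conf.GiraphConfiguration;
import org.apache.giraph.job.GiraphJob;

public int run(String[] arg0) throws Exception {
    GiraphConfiguration giraphConf = new GiraphConfiguration(getConf());
    // Placeholder computation class; substitute your own.
    giraphConf.setComputationClass(MyComputation.class);
    // One worker, 100% of workers must respond: typical for an IDE run.
    giraphConf.setWorkerConfiguration(1, 1, 100.0f);
    // Properties set here travel with the submitted job; the heap of
    // the driver JVM itself still comes from Eclipse's Run
    // Configuration (VM arguments, e.g. -Xmx4096m).
    GiraphJob job = new GiraphJob(giraphConf, "my-giraph-job");
    return job.run(true) ? 0 : -1;
}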
Hi all,
This property in hadoop/conf/mapred-site.xml works for me:
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx10g</value>
</property>
Regards,
Panagiotis
2016-11-04 23:11 GMT+02:00 Xenia Demetriou :
> Hi,
> I have the same problem. I added the following to mapred-site.xml and
> hadoop-env.sh, but I still have the same problem.
Hi,
I have the same problem. I added the following to mapred-site.xml and
hadoop-env.sh, but I still have the same problem. I tried various
values, but nothing increased the memory.

mapred-site.xml:

<property>
  <name>mapred.child.java.opts</name>
  <value>-Xms256m -Xmx4096m</value>
</property>

hadoop-env.sh:

export HADOOP_HEAPSIZE
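(For reference, HADOOP_HEAPSIZE sizes the heap of the hadoop client and
daemon JVMs, with the value in MB, e.g.:

export HADOOP_HEAPSIZE=4096

It does not size the map tasks that host the Giraph workers, so of the
two changes above only the mapred.child.java.opts property reaches the
task JVMs.)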
Hi,
Did you try running your code on a smaller data set? Did it work? Also,
you need to increase the -Xms and -Xmx options in the hadoop
configuration files. I do not remember the exact file name, but you
will probably find such an entry in mapred-site.xml.
:)
Thanks,
Agrta Rawat
On Sun, Oct 23,
More info:
If I add -Dgiraph.useOutOfCoreGraph=true it runs successfully, but
superstep -1 is extremely slow. If I do not add
-Dgiraph.useOutOfCoreGraph=true, it loads much faster but fails while
waiting for about the last 10 workers to finish superstep -1. The error
is:
org.apache.giraph.mast
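One knob that may help here: with out-of-core enabled, the number of
graph partitions held in memory is configurable, and raising it can
speed up the input superstep at the cost of heap. A sketch (the second
flag name is from Giraph 1.1's out-of-core support and is an assumption
here; check the options of your version):

-Dgiraph.useOutOfCoreGraph=true -Dgiraph.maxPartitionsInMemory=10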
Thanks, Agrta
Thanks for your response. How exactly can I increase the min and max
RAM? (In which conf file, or with which command/arguments? My
giraph-site.xml is empty by default.)
As I saw online how to increase the heap size (not sure it is the same
thing as the min/max RAM size you mentioned), m
Hi Hai,
Please check your giraph configurations. Try increasing min and max RAM
size in your configurations.
This should help.
Regards,
Agrta Rawat
On Sat, Oct 22, 2016 at 7:46 PM, Hai Lan wrote:
> Can anyone help with this?
>
> Thanks a lot!
>
>
> On Thu, Oct 20, 2016 at 9:48 PM, Hai Lan wrote:
Can anyone help with this?
Thanks a lot!
On Thu, Oct 20, 2016 at 9:48 PM, Hai Lan wrote:
> Dear all,
>
> I'm facing a problem when I run a large graph job (currently 1.6T, and
> 16T later): it always throws a java.lang.OutOfMemoryError: Java heap
> space error once a specific number of vertices (near 5900) has been
> loaded.
Dear all,
I'm facing a problem when I run a large graph job (currently 1.6T, and
16T later): it always throws a java.lang.OutOfMemoryError: Java heap
space error once a specific number of vertices (near 5900) has been
loaded. I tried adding options like:
-Dgiraph.useOutOfCoreGraph=true
-Dmapred.child.java.opts="-XX
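Written out, the attempt looks like this (the original -XX flags are cut
off above; the values below are illustrative placeholders, not the ones
actually used):

-Dgiraph.useOutOfCoreGraph=true \
-Dmapred.child.java.opts="-Xms256m -Xmx4096m"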