Hi Guys,
The OutOfMemoryError might be solved by adding
"-Dmapreduce.map.memory.mb=14848". But in my tests, I found some more
problems while running the out-of-core graph.
I did two tests with a 150G, 10^10-vertex input on version 1.2, and it
seems it is not necessary to add something like "giraph.userPartiti
Hi All,
In my case, this problem can be solved by adding
"-Dmapreduce.map.memory.mb=14848" on the command line. Just make sure
there is no other soft/hard memory limit for your account.
Thanks for the help.
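For reference, a minimal sketch of a full launch line with that option. The jar name, computation class, HDFS paths, and heap size below are placeholders, not taken from this thread:

```shell
# Hypothetical launch line: jar, computation class, paths, and worker
# count are placeholders. -Dmapreduce.map.memory.mb raises the YARN
# container size; mapreduce.map.java.opts should stay somewhat below it
# so the JVM heap fits inside the container.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  -Dmapreduce.map.memory.mb=14848 \
  -Dmapreduce.map.java.opts=-Xmx13g \
  org.apache.giraph.examples.SimpleShortestPathsComputation \
  -vif org.apache.giraph.io.formats.JsonLongDoubleFloatDoubleVertexInputFormat \
  -vip /user/me/input \
  -op /user/me/output \
  -w 144
```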
Best,
Hai
On Sat, Nov 5, 2016 at 2:46 AM, Panagiotis Liakos wrote:
> Hi all,
>
: Worker failed during input split (currently not supported)
This error seems just like
https://issues.apache.org/jira/browse/GIRAPH-904, but there are no
uppercase letters in my hostnames.
Any ideas about this?
Many Thanks,
Hai
On Sun, Oct 23, 2016 at 8:36 AM, Hai Lan wrote:
> Thanks Ag
16 at 7:11 AM, Agrta Rawat wrote:
> Hi Hai,
>
> Please check your giraph configurations. Try increasing min and max RAM
> size in your configurations.
> This should help.
>
> Regards,
> Agrta Rawat
>
>
> On Sat, Oct 22, 2016 at 7:46 PM, Hai Lan wrote:
>
>> C
Can anyone help with this?
Thanks a lot!
On Thu, Oct 20, 2016 at 9:48 PM, Hai Lan wrote:
> Dear all,
>
> I'm facing a problem when I run large graph job (currently 1.6T, will be
> 16T then), it always shows java.lang.OutOfMemoryError: Java heap
> space error when loaded
Dear all,
I'm facing a problem when I run a large graph job (currently 1.6T, will be
16T later): it always shows a java.lang.OutOfMemoryError: Java heap
space error when a specific number of vertices (near 5900) has been
loaded. I tried to add options like:
-Dgiraph.useOutOfCoreGraph=true
-Dmapred.child.java.opts="-XX
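For anyone hitting the same wall, a hedged sketch of how these out-of-core options are typically combined on the command line. The jar, computation class, paths, and partition count below are arbitrary placeholders, not values from this thread:

```shell
# Hypothetical combination of out-of-core settings; tune
# maxPartitionsInMemory to your heap. These spill graph partitions
# (and optionally messages) to local disk instead of holding the
# whole graph in memory.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  -Dgiraph.useOutOfCoreGraph=true \
  -Dgiraph.maxPartitionsInMemory=10 \
  -Dgiraph.useOutOfCoreMessages=true \
  MyComputation \
  -vip /user/me/input -op /user/me/output -w 144
```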
Dear all,
I am new to Giraph and am facing a problem removing edges. I'm trying to
remove all edges pointing to certain vertices, e.g. those with vertex
value 0, before the next superstep, by using
removeEdges().
My input file looks like:
[0,0.0,[[0,0]]]
[1,1.0,[[0,0]]]
[2,1.0,[[0,0],[1,0]]]
[3,1.0,[[0,0],[1,0]
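For what it's worth, a vertex only sees its out-edges, so removing all in-edges of the zero-valued vertices usually takes two supersteps: each vertex first announces its id along its out-edges, then the zero-valued targets request removal of the announcing edges. A rough, untested sketch of that pattern (the class name is hypothetical; the types assume the long/double/float JSON input format shown above):

```java
import java.io.IOException;

import org.apache.giraph.graph.BasicComputation;
import org.apache.giraph.graph.Vertex;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.FloatWritable;
import org.apache.hadoop.io.LongWritable;

// Hypothetical sketch, not a drop-in solution.
public class RemoveEdgesToZeroValued extends BasicComputation<
    LongWritable, DoubleWritable, FloatWritable, LongWritable> {

  @Override
  public void compute(
      Vertex<LongWritable, DoubleWritable, FloatWritable> vertex,
      Iterable<LongWritable> messages) throws IOException {
    if (getSuperstep() == 0) {
      // A vertex cannot see its in-edges, so every vertex tells its
      // out-neighbors who it is.
      sendMessageToAllEdges(vertex, vertex.getId());
    } else if (getSuperstep() == 1) {
      if (vertex.getValue().get() == 0.0) {
        // Ask each announcing in-neighbor to drop its edge(s) to this
        // vertex. Mutations are applied before the next superstep.
        for (LongWritable src : messages) {
          removeEdgesRequest(new LongWritable(src.get()), vertex.getId());
        }
      }
      vertex.voteToHalt();
    } else {
      vertex.voteToHalt();
    }
  }
}
```

With the sample input above, vertex 0 has value 0.0, so the edges from vertices 1, 2, and 3 to vertex 0 would be requested for removal before superstep 2.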
Hai
Hai Lan, PhD student
h...@umd.edu
Department of Geographical Science
University of Maryland, College Park
1104 LeFrak Hall
College Park, MD 20742, USA
On Fri, May 22, 2015 at 6:32 AM, Lukas Nalezenec
<lukas.naleze...@firma.seznam.cz> wrote:
> On 22.5.2015 12:25, Hai
Hello,
I'm trying to run a Giraph job with 180,092,160 vertices on an 18-node
cluster with 440G of memory. I used 144 workers with default partitioning.
However, my job is always killed after superstep 0 with the following error:
2015-05-22 05:20:57,668 ERROR [org.apache.giraph.master.MasterThread]
org.apach