Re: Out of memory with giraph-release-1.0.0-RC3, used to work on old Giraph

2013-09-04 Thread Jeff Peters
> ...Memory high-water mark: start: 300 Gb end: 300 Gb
> Iteration 1 XYZ
> Iteration 2 Computation: Memory high-water mark: start: 300 Gb end: 300 Gb
> .
> .
> .
>
> Lukas
>
> On 09/04/13 01:12, Jeff Peters wrote:
>> Thank you Lukas!!! That's EXACT...
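Lukas's trace above comes from logging a heap high-water mark at the start and end of each superstep's computation. A minimal sketch of how such a line can be produced with the standard java.lang.management API follows; the class and method names are illustrative, not Giraph's actual memory-logging code.

    import java.lang.management.ManagementFactory;
    import java.lang.management.MemoryPoolMXBean;
    import java.lang.management.MemoryType;

    /** Illustrative sketch: prints "Memory high-water mark" lines like the
     *  ones quoted above. Names are hypothetical, not Giraph's own code. */
    public final class HeapHighWater {
        private static final long GB = 1L << 30;

        /** Sum of peak used bytes across all heap pools, in Gb. */
        static long peakHeapGb() {
            long peak = 0;
            for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
                if (pool.getType() == MemoryType.HEAP) {
                    peak += pool.getPeakUsage().getUsed();
                }
            }
            return peak / GB;
        }

        /** Wraps one superstep's computation and logs start/end marks. */
        static void runSuperstep(long superstep, Runnable computation) {
            long start = peakHeapGb();
            computation.run();
            System.out.printf(
                "Iteration %d Computation: Memory high-water mark: start: %d Gb end: %d Gb%n",
                superstep, start, peakHeapGb());
        }

        public static void main(String[] args) {
            runSuperstep(1, new Runnable() {
                public void run() { /* superstep work would go here */ }
            });
        }
    }

That identical start and end figures never budge from 300 Gb is what makes the trace useful: the high-water mark is hit before the computation even begins.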

Re: Out of memory with giraph-release-1.0.0-RC3, used to work on old Giraph

2013-09-03 Thread Jeff Peters
> ...to trim it if you like.
>
> The byte arrays for the edges are the most efficient storage possible
> (although not as performant as the native edge stores).
>
> Hope that helps,
>
> Avery
>
> On 8/29/13 4:53 PM, Jeff Peters wrote:
>> Avery, it would seem tha...
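For context on the trade-off Avery is describing: Giraph 1.0 lets a job choose the out-edges representation per vertex. A sketch, assuming the 1.0 OutEdges API; treat the exact names (GiraphConfiguration.setOutEdgesClass, ByteArrayEdges) as my recollection rather than something verified against RC3.

    import org.apache.giraph.conf.GiraphConfiguration;
    import org.apache.giraph.edge.ByteArrayEdges;

    /** Sketch, assuming the Giraph 1.0 OutEdges API: choose the compact
     *  byte-array edge store over a pointer-heavy one. */
    public class CompactEdgeStoreConfig {
        public static void main(String[] args) {
            GiraphConfiguration conf = new GiraphConfiguration();
            // ByteArrayEdges packs each vertex's edges into one serialized
            // byte[]: heap cost close to the raw data size, but every access
            // deserializes, hence "not as performant as the native edge stores".
            conf.setOutEdgesClass(ByteArrayEdges.class);
        }
    }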

Re: Out of memory with giraph-release-1.0.0-RC3, used to work on old Giraph

2013-08-30 Thread Jeff Peters
> ...(although not as performant as the native edge stores).
>
> Hope that helps,
>
> Avery
>
> On 8/29/13 4:53 PM, Jeff Peters wrote:
>> Avery, it would seem that optimizations to Giraph have, unfortunately,
>> turned the majority of the heap into "dark matter". The...

Re: Out of memory with giraph-release-1.0.0-RC3, used to work on old Giraph

2013-08-28 Thread Jeff Peters
> ...getting a histogram of memory usage from a running JVM and see where
> the memory is going. I can't think of anything in particular that
> changed...
>
> On 8/28/13 4:39 PM, Jeff Peters wrote:
>>
>> I am tasked with updating our ancient (circa 7/10/2012) Giraph to...
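Avery's suggestion needs no special tooling: jmap, which ships with the JDK, prints a class histogram of a running JVM's heap. The <pid> below is a placeholder for the Giraph worker's process id.

    # Class histogram of live heap objects in a running worker JVM
    jmap -histo:live <pid> | head -25

Sorting is by retained bytes per class, so the top few lines usually show where the "dark matter" lives (e.g. byte[] versus boxed object graphs).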

Out of memory with giraph-release-1.0.0-RC3, used to work on old Giraph

2013-08-28 Thread Jeff Peters
I am tasked with updating our ancient (circa 7/10/2012) Giraph to giraph-release-1.0.0-RC3. Most jobs run fine, but our largest job now runs out of memory using the same AWS elastic-mapreduce configuration we have always used. I have never tried to configure either Giraph or the AWS Hadoop. We build...
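If the new code simply needs more headroom, the first knob to check on EMR Hadoop of that era is the map-task child heap, since Giraph workers run inside map tasks. A hedged sketch: mapred.child.java.opts is the Hadoop 1.x property name, and -Xmx4g is an arbitrary placeholder, not a recommendation.

    import org.apache.hadoop.conf.Configuration;

    /** Sketch: raising the per-task JVM heap that Giraph workers run in.
     *  Hadoop 1.x property name; the -Xmx value is a placeholder. */
    public class RaiseWorkerHeap {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            conf.set("mapred.child.java.opts", "-Xmx4g");
        }
    }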