> Memory high-water mark: start: 300 Gb end: 300 Gb
> Iteration 1 XYZ
> Iteration 2 Computation: Memory high-water mark: start: 300 Gb end: 300 Gb
> .
> .
> .
>
> Lukas
>
>
>
>
>
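> (For reference, per-iteration memory lines like the ones above can be produced with the JVM's `Runtime` API. This is only a minimal sketch of that idea, not Lukas's actual code; the class and method names are illustrative.)

```java
// Hedged sketch: log a heap high-water mark around a unit of work,
// in the style of the per-iteration lines quoted above.
// All names here are illustrative, not from Giraph.
public class MemoryWatermark {

    // Currently used heap, in bytes.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        long start = usedHeap();

        // Simulate some superstep allocation.
        byte[] scratch = new byte[16 << 20];
        scratch[0] = 1;

        long end = Math.max(start, usedHeap());
        System.out.printf(
            "Memory high-water mark: start: %d Mb end: %d Mb%n",
            start >> 20, end >> 20);
    }
}
```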
> On 09/04/13 01:12, Jeff Peters wrote:
>
> Thank you Lukas!!! That's EXACT
to trim it if you like.
>
> The byte arrays for the edges are the most efficient storage possible
> (although not as performant as the native edge stores).
>
> Hope that helps,
>
> Avery
>
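> (The byte-array edge representation Avery mentions is selected through a Giraph configuration option. A sketch of how a job might request it on the command line follows; the property name `giraph.outEdgesClass` is an assumption here, since it has changed across Giraph versions, so verify it against `GiraphConstants` in your release. The jar and computation class names are placeholders.)

```shell
# Hedged sketch: ask Giraph to use the byte-array out-edges implementation.
# The -ca property name is assumed and version-dependent; check
# GiraphConstants in your Giraph release before relying on it.
hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
  my.app.MyComputation \
  -ca giraph.outEdgesClass=org.apache.giraph.edge.ByteArrayEdges \
  ...
```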
> On 8/29/13 4:53 PM, Jeff Peters wrote:
>
> Avery, it would seem that optimizations to Giraph have, unfortunately,
> turned the majority of the heap into "dark matter". The
g a histogram of memory usage from a running JVM and see where
> the memory is going. I can't think of anything in particular that
> changed...
>
>
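> (A class histogram of a live JVM, as suggested above, can be taken with standard JDK tooling on a worker node. A sketch; `<pid>` is a placeholder for one of the running map-task JVMs.)

```shell
# List running JVMs with their main classes, then pick a map-task pid.
jps -lm

# Dump the top heap consumers by class for that JVM.
# jmap ships with the JDK; <pid> is a placeholder.
jmap -histo:live <pid> | head -30
```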
> On 8/28/13 4:39 PM, Jeff Peters wrote:
>
I am tasked with updating our ancient (circa 7/10/2012) Giraph to
giraph-release-1.0.0-RC3. Most jobs run fine but our largest job now runs
out of memory using the same AWS elastic-mapreduce configuration we have
always used. I have never tried to configure either Giraph or the AWS
Hadoop. We build