I changed the property name to mapred.job.counters.limit and restarted the
JobTracker and TaskTrackers again. Now it works.
Thanks,
Christian
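For reference, a sketch of where that property goes: it is a cluster-side
setting in mapred-site.xml on the JobTracker and TaskTracker nodes, which is
why a restart was needed (the value 512 below is just an illustrative limit,
not a recommendation):

    <!-- mapred-site.xml: raise the per-job counter limit (example value) -->
    <property>
      <name>mapred.job.counters.limit</name>
      <value>512</value>
    </property>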
2013/9/7 Claudio Martella claudio.marte...@gmail.com
did you restart TT and JT?
On Sat, Sep 7, 2013 at 7:09 AM, Christian Krause m...@ckrause.org wrote:
Hi,
I've increased
Sorry, it still doesn't work (I ran into a different problem before I
reached the limit).
I am using Hadoop 0.20.203.0. Is the limit of 120 counters maybe hardcoded?
Cheers
Christian
On 09.09.2013 08:29, Christian Krause m...@ckrause.org wrote:
I changed the property name to
If you are running out of counters, you can turn off the superstep counters:
/** Use superstep counters? (boolean) */
BooleanConfOption USE_SUPERSTEP_COUNTERS =
    new BooleanConfOption("giraph.useSuperstepCounters", true,
        "Use superstep counters? (boolean)");
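A sketch of turning that option off when launching a job via GiraphRunner's
-ca (custom argument) flag; the jar name and computation class here are
placeholders, not from this thread:

    hadoop jar giraph-examples.jar org.apache.giraph.GiraphRunner \
        my.ExampleComputation \
        -ca giraph.useSuperstepCounters=false \
        ...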
On 9/9/13 6:43 AM,
Thanks for the reply.
I tried setting giraph.maxPartitionsInMemory to 1, but I'm still
getting an OOM: "GC overhead limit exceeded".
Are there any particular cases the OOC will not be able to handle, or
is it supposed to work in all cases? If the latter, it might be that I
have made some configuration error.
Did you extend the heap available to the mapper tasks, e.g. through
mapred.child.java.opts?
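A minimal sketch of that setting, assuming it is placed in mapred-site.xml
(it can also be set per job); the 2 GB heap is an example value, not a
figure from this thread:

    <!-- example heap size for map/reduce child JVMs; adjust to your nodes -->
    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx2048m</value>
    </property>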
On Tue, Sep 10, 2013 at 12:50 AM, Alexander Asplund
alexaspl...@gmail.comwrote:
Really appreciate the swift responses! Thanks again.
I have not increased the mapper heap and decreased the max number of
partitions at the same time. I first ran tests with increased mapper
heap available, but reverted the setting after it apparently caused
other, large-volume, non-Giraph jobs to
A small note: I'm not seeing any partitions directory being formed
under _bsp, which is where I have understood that they should be
appearing.
On 9/10/13, Alexander Asplund alexaspl...@gmail.com wrote:
Alexander:
You might try turning off the GC overhead limit
(-XX:-UseGCOverheadLimit).
Also, you could turn on verbose GC logging
(-verbose:gc -Xloggc:/tmp/@taskid@.gc)
to see what is happening.
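Those flags would be appended to the same child JVM options as the heap
size, for example in mapred-site.xml (the -Xmx value is illustrative;
@taskid@ is expanded by Hadoop to the current task ID):

    <property>
      <name>mapred.child.java.opts</name>
      <value>-Xmx2048m -XX:-UseGCOverheadLimit -verbose:gc -Xloggc:/tmp/@taskid@.gc</value>
    </property>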
Because the OOC still has to create and destroy objects, I suspect that
the heap is just getting