I was able to increase the counters limit with Counters.MAX_COUNTER_LIMIT
= 2024 (works with -Phadoop_1 and Hadoop 1.2.1).
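For reference, the workaround amounts to something like the following in the test setup. This is a sketch, assuming MAX_COUNTER_LIMIT is a mutable public static field on the Hadoop 1.x Counters class (which the above suggests, but which does not hold on the Hadoop 2.x line, where counter limits are handled differently):

```java
import org.apache.hadoop.mapred.Counters;

public class RaiseCounterLimit {
    public static void main(String[] args) {
        // Raise the hard counter limit before running the in-memory test.
        // ASSUMPTION: MAX_COUNTER_LIMIT is a writable public static field,
        // which appears to be the case only on the Hadoop 1.x branch.
        Counters.MAX_COUNTER_LIMIT = 2024;
    }
}
```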

Then it turned out that whatever limit I set, it was always exceeded.

It turned out that, for some reason, the IntOverwriteAggregator that
SccPhaseMasterCompute uses to propagate the algorithm phase didn't work as
expected: when read from computations it had the correct value, while when
read from the master computation it returned the old value.

I am writing a similar test, where the value to be passed only increases,
and I was able to work around this issue by using a Max aggregator instead
of an Overwrite aggregator.
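Why Max is a safe drop-in here can be sketched with a toy model of the two reduction semantics (the names and helper below are mine, not Giraph's): when the aggregated value only ever increases, max-combining always yields the newest value regardless of the order in which updates are applied, whereas overwrite depends on the last update winning.

```java
import java.util.function.BinaryOperator;

public class AggregatorSketch {
    // Overwrite semantics: whichever value is applied last wins.
    static final BinaryOperator<Integer> OVERWRITE = (current, incoming) -> incoming;
    // Max semantics: the largest value seen so far wins.
    static final BinaryOperator<Integer> MAX = Math::max;

    static int aggregate(BinaryOperator<Integer> op, int initial, int... values) {
        int acc = initial;
        for (int v : values) {
            acc = op.apply(acc, v);
        }
        return acc;
    }

    public static void main(String[] args) {
        // If a stale phase value (2) is applied after the fresh one (3),
        // overwrite regresses to the stale value, while max keeps the
        // newest phase -- safe because the phase number only increases.
        System.out.println(aggregate(OVERWRITE, 0, 3, 2)); // 2 (stale wins)
        System.out.println(aggregate(MAX, 0, 3, 2));       // 3 (monotonic)
    }
}
```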

Note that I haven't tried to run it yet; these are just results from unit
tests.

btw, I'm using release-1.1.0
2015-03-01 23:42 GMT+01:00 Michał Szynkiewicz <szynkiew...@gmail.com>:

> Hi,
>
> I'm trying to run SccComputationTestInMemory and I'm
> hitting org.apache.hadoop.mapreduce.counters.LimitExceededException: Too
> many counters: 121 max=120
>
> I tried adding both
> conf.set("mapreduce.job.counters.max", Integer.toString(1024));
> and
> conf.set("mapreduce.job.counters.limit", Integer.toString(1024));
> at the beginning of the test, but neither of these changed the limit of
> counters.
>
> I tried -Phadoop_2 with hadoop.version=2.6.0 and 2.5.1, -Phadoop_1 with
> 1.2.1, -Phadoop_0.20.203.
>
> How can I run this test successfully?
>
> Thanks
>
> Michał
>
