Hi Rodrigo,
You have several options to increase the counter limit (for instance, to
600). The property name depends on the Hadoop version you use: on Hadoop
1.x it is mapreduce.job.counters.limit, while on Hadoop 2.x it is
mapreduce.job.counters.max.
1. Set it system-wide by changing mapred-site.xml (on EMR it should
reside in /home/hadoop/.versions/{latest}/conf). You have to restart the
JobTracker after changing the configuration:
<configuration>
  ...
  <property>
    <name>mapreduce.job.counters.limit</name>
    <value>600</value>
  </property>
  ...
</configuration>
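If you'd rather restart only the MapReduce daemons instead of the whole
cluster, a plain Apache Hadoop 1.x install ships stop/start scripts for
that (the EMR AMI may manage its daemons differently, so treat this as a
sketch for a vanilla install):

```shell
# Run from the Hadoop installation directory on the master node.
# stop-mapred.sh / start-mapred.sh stop and start the JobTracker
# (and TaskTrackers) so the new mapred-site.xml is picked up.
bin/stop-mapred.sh
bin/start-mapred.sh
```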
2. Per Pig script, by setting the mapreduce.job.counters.limit property
at the top of the script (I haven't tested this):
SET mapreduce.job.counters.limit 600
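For completeness, here is a minimal (untested) sketch of how that SET
line would sit in a Pig script; the S3 paths are hypothetical:

```pig
-- Raise the counter limit before the first job is launched;
-- on Hadoop 2.x the property name is mapreduce.job.counters.max instead.
SET mapreduce.job.counters.limit 600;

raw = LOAD 's3://my-bucket/input/';                -- hypothetical input path
grp = GROUP raw ALL;
cnt = FOREACH grp GENERATE COUNT(raw);
STORE cnt INTO 's3://my-bucket/output/';           -- hypothetical output path
```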
Hope this helps.
P.S.: If you want to reduce your EMR costs and don't want to manage
Hadoop yourself but still gain its power, you can use Xplenty
<https://www.xplenty.com>, a simplified Big Data processing platform
powered by Hadoop. Let me know if you'd like to schedule a demo.
Cheers,
Moty
On Fri, Oct 17, 2014 at 12:44 AM, Rodrigo Ferreira <[email protected]> wrote:
> And how can I do that? I'm using Pig with AWS EMR.
>
> Rodrigo.
>
> 2014-10-16 15:17 GMT-03:00 Serega Sheypak <[email protected]>:
>
> > I suppose you have to set this prop on the jobtracker side
> >
> > 2014-10-16 22:03 GMT+04:00 Rodrigo Ferreira <[email protected]>:
> >
> > > Hi guys,
> > >
> > > I'm getting a "Job failed! Error - Counters Exceeded: Limit: 120" error
> > in
> > > my Pig script. I've tried to set both versions of this parameter that
> > I've
> > > found on the internet.
> > >
> > > SET mapreduce.job.counters.max 500
> > > SET mapreduce.job.counters.limit 500
> > >
> > > But I keep getting the same error. Why is this parameter not working?
> > >
> > > Thanks,
> > > Rodrigo.
> > >
> >
>