If you are using EMR, you need to set this at startup; you can't change it on 
the fly...
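
A sketch of how that might look at launch time (assuming EMR's standard 
configure-hadoop bootstrap action and the Hadoop 1.x property name 
`mapreduce.job.counters.limit`; adjust both for your AMI/Hadoop version):

```shell
# Raise the counter limit when the cluster is created; it cannot be
# changed later, since the JobTracker reads it once at startup.
# The -m flag writes the key into mapred-site.xml on each node.
elastic-mapreduce --create --name "REGISTER table to S3 v2" \
  --num-instances 6 --instance-type m1.xlarge \
  --bootstrap-action s3://elasticmapreduce/bootstrap-actions/configure-hadoop \
  --args "-m,mapreduce.job.counters.limit=512" \
  --hive-script --arg s3://censored/dynamo-to-s3-v2.h \
  --args -d,OUTPATH=s3://censored/out/,-d,INTABLE="REGISTER"
```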

Sent from a remote device. Please excuse any typos...

Mike Segel

On Jul 2, 2013, at 12:24 PM, Jean-Marc Spaggiari <jean-m...@spaggiari.org> 
wrote:

> Hi Glen,
> 
> You don't need to recompile to change this limit...
> 
> Take a look there:
> http://stackoverflow.com/questions/12140177/more-than-120-counters-in-hadoop
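> 
> For example (sketching the property from that answer; in Hadoop 1.x it is 
> `mapreduce.job.counters.limit`, renamed `mapreduce.job.counters.max` in 2.x, 
> and the JobTracker only reads it at startup, so a restart is needed):
> 
> ```xml
> <!-- mapred-site.xml -->
> <property>
>   <name>mapreduce.job.counters.limit</name>
>   <value>512</value>
> </property>
> ```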
> 
> JM
> 
> 2013/7/2 Glen Arrowsmith <garrowsm...@halfbrick.com>:
>> Hi,
>> I'm getting an error on a map reduce task that used to work just fine for a 
>> few weeks.
>> 
>> Exceeded limits on number of counters - Counters=120 Limit=120
>> 
>> The full stderr output is at the bottom.
>> 
>> I'm using Amazon's Elastic MapReduce.
>> The following command starts the job
>> elastic-mapreduce --create --name "REGISTER table to S3 v2" --num-instances 6 
>> --with-supported-products mapr-m5 --instance-type m1.xlarge --hive-script 
>> --arg s3://censored/dynamo-to-s3-v2.h --args 
>> -d,OUTPATH=s3://censored/out/,-d,INTABLE="REGISTER"
>> 
>> From what I've read, you can't change the counter limit without recompiling.
>> 
>> Originally I had "fixed" this problem by upgrading from standard map reduce 
>> instances to mapr-m5 instances, but that has now stopped working for some reason.
>> 
>> Thanks very much in advance for your help
>> 
>> Glen Arrowsmith
>> Systems Architect
>> 
>> 
>> /mnt/var/lib/hadoop/steps/2/./hive-script:326: warning: Insecure world 
>> writable dir /home/hadoop/bin in PATH, mode 040757
>> Logging initialized using configuration in 
>> file:/home/hadoop/.versions/hive-0.8.1/conf/hive-log4j.properties
>> Hive history 
>> file=/mnt/var/lib/hive_081/tmp/history/hive_job_log_hadoop_201307020009_133883985.txt
>> OK
>> [snip]
>> Time taken: 0.389 seconds
>> OK
>> Time taken: 0.382 seconds
>> Total MapReduce jobs = 12
>> Launching Job 1 out of 12
>> Number of reduce tasks not specified. Defaulting to jobconf value of: 10
>> In order to change the average load for a reducer (in bytes):
>>  set hive.exec.reducers.bytes.per.reducer=<number>
>> In order to limit the maximum number of reducers:
>>  set hive.exec.reducers.max=<number>
>> In order to set a constant number of reducers:
>>  set mapred.reduce.tasks=<number>
>> Starting Job = job_201307020007_0001, Tracking URL = 
>> http://ip-10-151-78-231.ec2.internal:9100/jobdetails.jsp?jobid=job_201307020007_0001
>> Kill Command = /opt/mapr/hadoop/hadoop-0.20.2/bin/../bin/hadoop job  
>> -Dmapred.job.tracker=maprfs:/// -kill job_201307020007_0001
>> Hadoop job information for Stage-12: number of mappers: 23; number of 
>> reducers: 10
>> 2013-07-02 00:09:30,325 Stage-12 map = 0%,  reduce = 0%
>> org.apache.hadoop.mapred.Counters$CountersExceededException: Error: Exceeded 
>> limits on number of counters - Counters=120 Limit=120
>>     at org.apache.hadoop.mapred.Counters$Group.getCounterForName(Counters.java:318)
>>     at org.apache.hadoop.mapred.Counters.findCounter(Counters.java:439)
>>     at org.apache.hadoop.mapred.Counters.getCounter(Counters.java:503)
>>     at org.apache.hadoop.hive.ql.exec.Operator.updateCounters(Operator.java:1150)
>>     at org.apache.hadoop.hive.ql.exec.ExecDriver.updateCounters(ExecDriver.java:1281)
>>     at org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.updateCounters(HadoopJobExecHelper.java:85)
>>     at org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:312)
>>     at org.apache.hadoop.hive.ql.exec.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:685)
>>     at org.apache.hadoop.hive.ql.exec.ExecDriver.execute(ExecDriver.java:494)
>>     at org.apache.hadoop.hive.ql.exec.MapRedTask.execute(MapRedTask.java:136)
>>     at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:133)
>>     at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:57)
>>     at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:47)
>> Ended Job = job_201307020007_0001 with exception 
>> 'org.apache.hadoop.mapred.Counters$CountersExceededException(Error: Exceeded 
>> limits on number of counters - Counters=120 Limit=120)'
>> FAILED: Execution Error, return code 1 from 
>> org.apache.hadoop.hive.ql.exec.MapRedTask
>> Command exiting with ret '255'
> 
