I have problems with reduce tasks failing with "GC overhead limit exceeded".
My reduce job retains only a small amount of data in memory while processing
each key, discarding it after the key is handled.
My mapred.child.java.opts is -Xmx1200m.
I tried:
 mapred.job.shuffle.input.buffer.percent = 0.20
 mapred.job.reduce.input.buffer.percent = 0.30
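For reference, here is how those settings (plus the child JVM heap) look as a
mapred-site.xml fragment - just a sketch with the same values I tried, in case
it helps anyone spot a problem:

```xml
<!-- mapred-site.xml fragment (sketch; same values as tried above) -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1200m</value>
</property>
<property>
  <name>mapred.job.shuffle.input.buffer.percent</name>
  <value>0.20</value>
</property>
<property>
  <name>mapred.job.reduce.input.buffer.percent</name>
  <value>0.30</value>
</property>
```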

I really don't know what other parameters I can set to lower the memory
footprint of my reducer and could use some help.

I am only passing tens of thousands of keys, each with thousands of values;
each value is maybe 10 KB.
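A rough back-of-envelope estimate of what one key should cost in memory, taking
"thousands of values" as a hypothetical 2,000 (that number is my assumption,
not a measurement):

```python
# Memory retained while processing a single reduce key.
values_per_key = 2_000   # assumed midpoint of "thousands of values"
value_size_kb = 10       # "each value is maybe 10 KB"
per_key_mb = values_per_key * value_size_kb / 1024

heap_mb = 1200           # from -Xmx1200m

# One key's data is a small fraction of the heap.
print(f"~{per_key_mb:.0f} MB per key vs {heap_mb} MB heap")  # → ~20 MB per key vs 1200 MB heap
```

So even with generous assumptions, per-key data alone should not come close to
exhausting the heap, which is why I suspect a configuration issue.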


-- 
Steven M. Lewis PhD
4221 105th Ave NE
Kirkland, WA 98033
206-384-1340 (cell)
Skype lordjoe_com