Hi guys,
I am trying to run a few MR jobs in succession; some of the jobs don't need much memory and others do. I want to be able to tell Hadoop how much memory should be allocated for the mappers of each job.
I know how to increase the memory for a mapper JVM through mapred-site.xml.
I tried manually setting mapreduce.reduce.java.opts=-Xmx<someNumber>m, but it wasn't picked up by the mapper JVM; the global setting was always used instead.
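For illustration, this is roughly the kind of per-job override I am after - a minimal sketch, assuming a Hadoop 2 / YARN cluster and the new mapreduce API (Job.getInstance); the driver class name is just a placeholder:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class PerJobHeapDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Per-job overrides: small heap for this job's mappers,
            // larger heap for the reducers of the heavier job
            conf.set("mapreduce.map.java.opts", "-Xmx250m");
            conf.set("mapreduce.reduce.java.opts", "-Xmx2048m");

            // Job copies the Configuration, so set the properties before creating it
            Job job = Job.getInstance(conf, "per-job-heap-example");
            job.setJarByClass(PerJobHeapDriver.class);
            // setMapperClass / setReducerClass / key-value types go here as usual
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }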

In summary:
Job 1 - Mappers need only 250 MB of RAM
Job 2 - Mappers and Reducers need around 2 GB

I want to be able to set these limits per job, before submitting each one to my Hadoop cluster.
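If it is easier to do this from the command line, that would also work for me, e.g. something along these lines, assuming each driver runs through ToolRunner/GenericOptionsParser so that -D properties are picked up (the jar and class names are just placeholders):

    hadoop jar myjobs.jar com.example.Job1Driver \
        -D mapreduce.map.java.opts=-Xmx250m \
        input/ out1/

    hadoop jar myjobs.jar com.example.Job2Driver \
        -D mapreduce.map.java.opts=-Xmx2048m \
        -D mapreduce.reduce.java.opts=-Xmx2048m \
        out1/ out2/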
