Hi guys,
I am trying to run a few MR jobs in succession, and some of the jobs need
much less memory than others. I want to be able to tell Hadoop how much
memory should be allocated for the mappers of each individual job.
I know how to increase the memory for a mapper JVM globally through
mapred-site.xml, but not how to do it per job.
Peter
If you are using Oozie to launch the MR jobs, you can specify the memory
requirements in the workflow action for each job, in the workflow XML you
use to launch it. If you are writing your own driver program to launch the
jobs, you can still set these parameters in the job configuration before
submitting each job.
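For the driver-program route, here is a minimal sketch, assuming the newer
org.apache.hadoop.mapreduce API; the class and method names
(PerJobMemoryDriver, buildJob) are made up for illustration, and the exact
property names vary by Hadoop version (older releases use
mapred.job.map.memory.mb and mapred.map.child.java.opts instead):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class PerJobMemoryDriver {
    // Build one job with its own mapper memory setting; call this once
    // per job with a different mapMemoryMb for each.
    public static Job buildJob(String name, int mapMemoryMb) throws Exception {
        Configuration conf = new Configuration();
        // Memory requested for each map task container of this job only.
        conf.setInt("mapreduce.map.memory.mb", mapMemoryMb);
        // JVM heap for the mapper; keep it below the container limit
        // to leave headroom for non-heap memory.
        conf.set("mapreduce.map.java.opts",
                 "-Xmx" + (mapMemoryMb * 3 / 4) + "m");
        Job job = Job.getInstance(conf, name);
        // ... set mapper/reducer classes and input/output paths here,
        // then submit with job.waitForCompletion(true)
        return job;
    }
}
```

Since the settings live in each Job's own Configuration, jobs submitted
one after another from the same driver can each get a different mapper
memory size.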