I suspect that the reason no one is responding with good answers is that,
fundamentally, what you are trying to do runs against the way Hadoop is
designed. A parallel processing framework is defeated if you force it not
to work concurrently...
Maybe you should look into
Hi,
I have a cluster of 7 nodes. Every node has 2 map slots and 1 reduce slot.
Is it possible to force the jobtracker to execute only 2 map jobs or 1
reduce job at a time? I have found this configuration option:
mapred.reduce.slowstart.completed.maps. I think this will do exactly what
I want if I
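For what it's worth, that property doesn't cap concurrent maps; it sets the fraction of map tasks that must complete before reducers are scheduled (the default is 0.05). A sketch of setting it in mapred-site.xml so reducers only launch once every map has finished (value shown is illustrative):

```xml
<!-- mapred-site.xml: delay reducer launch until all maps have completed -->
<property>
  <name>mapred.reduce.slowstart.completed.maps</name>
  <value>1.00</value>
</property>
```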
Why do you want to do this?
+Vinod
On Nov 5, 2013, at 9:17 AM, John wrote:
Is it possible to force the jobtracker to execute only 2 map jobs or 1 reduce
job at a time?
--
Because my node swaps memory when the 2 map slots + 1 reduce slot are all
occupied by my job. Sure, I can lower the max memory for the map/reduce
processes. I tried this already, but I got an out-of-memory exception when I
set the max heap size for the map/reduce processes too low for my MR job.
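If the goal is to cap how many tasks run concurrently on each node, the usual knob is the per-tasktracker slot count rather than a jobtracker setting. A mapred-site.xml sketch, assuming Hadoop 1.x property names (the values are hypothetical, and each tasktracker must be restarted to pick them up):

```xml
<!-- mapred-site.xml on each worker node -->
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>1</value> <!-- concurrent map tasks allowed on this node -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>1</value> <!-- concurrent reduce tasks allowed on this node -->
</property>
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value> <!-- heap for each task's child JVM -->
</property>
```

The trade-off is that slot counts are cluster-wide for all jobs, which is exactly the loss of parallelism the earlier reply warns about.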
Kind regards