Hi all,

I have a Hadoop cluster with 4 DataNode+NodeManager machines and 1 NameNode+ResourceManager machine. I'm launching an MR job (identity mapper and identity reducer) with the relevant memory settings set to appropriate values: mapreduce.[map|reduce].memory.mb, JAVA_CHILD_OPTS, the map sort buffer, the reduce buffers, etc.
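
For reference, here is roughly what my job driver looks like. This is a minimal sketch: the property names assume Hadoop 2.x / YARN, and the values are placeholders rather than my exact settings.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class IdentityJob {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // Container sizes requested from YARN (MB).
    conf.setInt("mapreduce.map.memory.mb", 2048);
    conf.setInt("mapreduce.reduce.memory.mb", 2048);

    // Heap of the task JVMs (what I meant by JAVA_CHILD_OPTS),
    // kept below the container sizes above.
    conf.set("mapreduce.map.java.opts", "-Xmx1536m");
    conf.set("mapreduce.reduce.java.opts", "-Xmx1536m");

    // Map-side sort buffer and reduce-side shuffle buffer.
    conf.setInt("mapreduce.task.io.sort.mb", 512);
    conf.setFloat("mapreduce.reduce.shuffle.input.buffer.percent", 0.70f);

    Job job = Job.getInstance(conf, "identity-mr");
    job.setJarByClass(IdentityJob.class);
    // The base Mapper and Reducer classes are identity implementations.
    job.setMapperClass(Mapper.class);
    job.setReducerClass(Reducer.class);
    job.setOutputKeyClass(LongWritable.class);
    job.setOutputValueClass(Text.class);

    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}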

Does the framework guarantee that I will not run into an out-of-memory situation for any input dataset size? In other words, are the following the only things that can lead to an out-of-memory error in the mappers or reducers:
- bad memory settings (for example, map sort buffer > mapreduce.map.memory.mb)
- bad mapper/reducer code (user code)

Best Regards,


