Not sure I understand the question -- all of the jobs need to run for the recommendations to complete. It's a process of about five distinct MapReduce jobs. Which one fails with an OOME? They have names; you can see them in the console.
Are you giving the Hadoop workers enough memory? By default they can only use about 64MB of heap, which is far too little. For example, in conf/mapred-site.xml, add a new property named "mapred.child.java.opts" with the value "-Xmx1024m" to give workers up to 1GB of heap. They probably don't need that much, but you might as well not limit it.
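For reference, the entry would look roughly like this (assuming the standard Hadoop XML config format; it goes inside the <configuration> element that mapred-site.xml should already have):

  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>
  </property>

You'll likely need to restart the workers for the new setting to take effect.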
