My fetch cycle failed with the following initial error:

java.io.IOException: Task process exit with nonzero status of 65.
        at org.apache.hadoop.mapred.TaskRunner.run(TaskRunner.java:425)

Then it made a second attempt, and after 3 hours I hit this error
(although I had doubled HADOOP_HEAPSIZE):

java.lang.OutOfMemoryError: GC overhead limit exceeded
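
(For reference, here is all I changed, in case it matters. The 2000 is my
value; as far as I know the stock default is 1000 MB:)

        # conf/hadoop-env.sh: max heap in MB for the Hadoop JVMs (default 1000)
        export HADOOP_HEAPSIZE=2000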


Any idea what the initial error is, or what might cause it?
For the second one, I'm going to reduce the number of threads... but I'm
wondering if there could be a memory leak? I don't know how to trace that.
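
(As I understand it, "GC overhead limit exceeded" means the JVM was spending
nearly all its time in garbage collection while freeing almost no heap, which
could indeed point to a leak. Here is a sketch of what I'm thinking of trying,
assuming the OOM happens in the child task JVM; the -Xmx value and dump path
are just examples, not recommendations:)

        <!-- mapred-site.xml (or nutch-site.xml): have task JVMs dump the
             heap when they die with an OutOfMemoryError -->
        <property>
          <name>mapred.child.java.opts</name>
          <value>-Xmx1000m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp</value>
        </property>

(Then I could inspect the .hprof file, or take a live class histogram of the
fetcher JVM while it runs, to see which objects keep growing:)

        jmap -histo <pid-of-task-jvm> | head -30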

-- 
-MilleBii-
