Hi,

Please help!

When running the random forest training phase in cluster mode, I got a
java.lang.OutOfMemoryError: GC overhead limit exceeded (full log below).

I used these two memory parameters when submitting the job to the cluster:

--driver-memory 64g \
--executor-memory 8g \
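
For reference, the full submit command looks roughly like this (the master
URL, class name, and jar path are placeholders, not my actual values):

spark-submit \
  --master spark://master:7077 \
  --class com.example.RandomForestTrainer \
  --driver-memory 64g \
  --executor-memory 8g \
  my-app.jar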

My current settings:

(spark-defaults.conf)
spark.executor.memory           8g

(spark-env.sh)
export SPARK_WORKER_MEMORY=8g
export HADOOP_HEAPSIZE=8000
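
To get more detail on what the executors' GC is doing, I could also turn on
GC logging via spark-defaults.conf, along these lines (just a sketch of what
I might try, not part of my current config):

(spark-defaults.conf)
# example only: verbose GC logging in each executor JVM
spark.executor.extraJavaOptions  -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps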


Any idea how to resolve it?

Regards

### (the error log) ###

16/07/23 04:34:04 WARN TaskSetManager: Lost task 2.0 in stage 6.1 (TID 30, n1794): java.lang.OutOfMemoryError: GC overhead limit exceeded
        at scala.reflect.ManifestFactory$$anon$12.newArray(Manifest.scala:138)
        at scala.reflect.ManifestFactory$$anon$12.newArray(Manifest.scala:136)
        at org.apache.spark.util.collection.CompactBuffer.growToSize(CompactBuffer.scala:144)
        at org.apache.spark.util.collection.CompactBuffer.$plus$plus$eq(CompactBuffer.scala:90)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$groupByKey$1$$anonfun$10.apply(PairRDDFunctions.scala:505)
        at org.apache.spark.rdd.PairRDDFunctions$$anonfun$groupByKey$1$$anonfun$10.apply(PairRDDFunctions.scala:505)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator.mergeIfKeyExists(ExternalAppendOnlyMap.scala:318)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator.next(ExternalAppendOnlyMap.scala:365)
        at org.apache.spark.util.collection.ExternalAppendOnlyMap$ExternalIterator.next(ExternalAppendOnlyMap.scala:265)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:927)
        at org.apache.spark.rdd.RDD$$anonfun$collect$1$$anonfun$12.apply(RDD.scala:927)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1858)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
        at org.apache.spark.scheduler.Task.run(Task.scala:89)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
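
For what it's worth, the frames point at a groupByKey whose per-key buffers
are built up on the executor and then collected to the driver. A minimal
standalone sketch of that generic pattern (hypothetical data, not my actual
job or the MLlib internals) would be:

import org.apache.spark.{SparkConf, SparkContext}

object GroupByKeyOomSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("groupByKey-oom-sketch"))
    // Skewed pairs: only 10 distinct keys, so each key's CompactBuffer
    // grows very large on the executor (CompactBuffer.growToSize in the trace).
    val pairs = sc.parallelize(1L to 100000000L).map(i => (i % 10, i))
    // groupByKey materializes every value for a key in executor memory, and
    // collect() then pulls all groups back to the driver (RDD.collect in the
    // trace) -- either step can exhaust the heap.
    val grouped = pairs.groupByKey().collect()
    println(grouped.map { case (k, vs) => (k, vs.size) }.mkString(", "))
    sc.stop()
  }
}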
