Hi

There are two ways to resolve the issue (examples below):

1. Increase the heap size, e.g. via "-Xmx1024m" (or more), or
2. Disable the error check altogether, via "-XX:-UseGCOverheadLimit".

As per
http://stackoverflow.com/questions/5839359/java-lang-outofmemoryerror-gc-overhead-limit-exceeded
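
For example, with spark-submit both settings can be passed on the command line. A minimal sketch (the class and jar names here are placeholders for your own job); note that Spark does not allow the heap size (-Xmx) itself inside the extraJavaOptions values, so the heap is sized via --driver-memory / --executor-memory instead:

  spark-submit \
    --driver-memory 2g \
    --executor-memory 4g \
    --conf "spark.driver.extraJavaOptions=-XX:-UseGCOverheadLimit" \
    --conf "spark.executor.extraJavaOptions=-XX:-UseGCOverheadLimit" \
    --class com.example.MyJob \
    my-job.jar

Since your trace shows the OOM in the driver's main thread (DriverWrapper.main), the driver-side settings are likely the ones that matter here.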

You can also pass these Java options to Spark by updating conf/spark-defaults.conf, e.g.:

spark.executor.extraJavaOptions  -XX:+PrintGCDetails -Dkey=value -Dnumbers="one two three"
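
Putting the two together, a sketch of what the relevant entries in conf/spark-defaults.conf might look like (the sizes are illustrative, not recommendations; tune them to your nodes, and again note that -Xmx itself cannot go inside the extraJavaOptions values):

  # conf/spark-defaults.conf
  spark.driver.memory              2g
  spark.executor.memory            4g
  spark.driver.extraJavaOptions    -XX:-UseGCOverheadLimit
  spark.executor.extraJavaOptions  -XX:-UseGCOverheadLimit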


Thanks
Arush

On Thu, Jan 29, 2015 at 2:36 PM, ey-chih chow <eyc...@hotmail.com> wrote:

> Hi,
>
> I submitted a job using spark-submit and got the following exception.
> Does anybody know how to fix this?  Thanks.
>
> Ey-Chih Chow
>
> ============================================
>
> 15/01/29 08:53:10 INFO storage.BlockManagerMasterActor: Registering block manager ip-10-10-8-191.us-west-2.compute.internal:47722 with 6.6 GB RAM
> Exception in thread "main" java.lang.reflect.InvocationTargetException
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>         at java.lang.reflect.Method.invoke(Method.java:606)
>         at org.apache.spark.deploy.worker.DriverWrapper$.main(DriverWrapper.scala:40)
>         at org.apache.spark.deploy.worker.DriverWrapper.main(DriverWrapper.scala)
> Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at org.apache.hadoop.mapreduce.lib.input.FileInputFormat.getSplits(FileInputFormat.java:265)
>         at org.apache.spark.rdd.NewHadoopRDD.getPartitions(NewHadoopRDD.scala:94)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
>         at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
>         at org.apache.spark.rdd.MappedRDD.getPartitions(MappedRDD.scala:28)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:204)
>         at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:202)
>         at scala.Option.getOrElse(Option.scala:120)
>         at org.apache.spark.rdd.RDD.partitions(RDD.scala:202)
>         at org.apache.spark.SparkContext.runJob(SparkContext.scala:1128)
>         at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:935)
>         at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:832)
>         at com.crowdstar.etl.ParseAndClean$.main(ParseAndClean.scala:109)
>         at com.crowdstar.etl.ParseAndClean.main(ParseAndClean.scala)
>         ... 6 more
> 15/01/29 08:54:33 INFO storage.BlockManager: Removing RDD 1
> 15/01/29 08:54:33 ERROR actor.ActorSystemImpl: exception on LARS’ timer thread
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at akka.actor.LightArrayRevolverScheduler$$anon$12.nextTick(Scheduler.scala:397)
>         at akka.actor.LightArrayRevolverScheduler$$anon$12.run(Scheduler.scala:363)
>         at java.lang.Thread.run(Thread.java:745)
> 15/01/29 08:54:33 ERROR actor.ActorSystemImpl: Uncaught fatal error from thread [sparkDriver-scheduler-1] shutting down ActorSystem [sparkDriver]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at akka.actor.LightArrayRevolverScheduler$$anon$12.nextTick(Scheduler.scala:397)
>         at akka.actor.LightArrayRevolverScheduler$$anon$12.run(Scheduler.scala:363)
>         at java.lang.Thread.run(Thread.java:745)
> 15/01/29 08:54:33 ERROR actor.ActorSystemImpl: exception on LARS’ timer thread
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at akka.dispatch.AbstractNodeQueue.<init>(AbstractNodeQueue.java:19)
>         at akka.actor.LightArrayRevolverScheduler$TaskQueue.<init>(Scheduler.scala:431)
>         at akka.actor.LightArrayRevolverScheduler$$anon$12.nextTick(Scheduler.scala:397)
>         at akka.actor.LightArrayRevolverScheduler$$anon$12.run(Scheduler.scala:363)
>         at java.lang.Thread.run(Thread.java:745)
> 15/01/29 08:54:33 ERROR actor.ActorSystemImpl: Uncaught fatal error from thread [Driver-scheduler-1] shutting down ActorSystem [Driver]
> java.lang.OutOfMemoryError: GC overhead limit exceeded
>         at akka.dispatch.AbstractNodeQueue.<init>(AbstractNodeQueue.java:19)
>         at akka.actor.LightArrayRevolverScheduler$TaskQueue.<init>(Scheduler.scala:431)
>         at akka.actor.LightArrayRevolverScheduler$$anon$12.nextTick(Scheduler.scala:397)
>         at akka.actor.LightArrayRevolverScheduler$$anon$12.run(Scheduler.scala:363)
>         at java.lang.Thread.run(Thread.java:745)
> 15/01/29 08:54:33 WARN storage.BlockManagerMasterActor: Removing BlockManager BlockManagerId(0, ip-10-10-8-191.us-west-2.compute.internal, 47722, 0) with no recent heart beats: 82575ms exceeds 45000ms
> 15/01/29 08:54:33 INFO spark.ContextCleaner: Cleaned RDD 1
> 15/01/29 08:54:33 WARN util.AkkaUtils: Error sending message in 1 attempts
> akka.pattern.AskTimeoutException: Recipient[Actor[akka://sparkDriver/user/BlockManagerMaster#-538003375]] had already been terminated.
>         at akka.pattern.AskableActorRef$.ask$extension(AskSupport.scala:134)
>         at org.apache.spark.util.AkkaUtils$.askWithReply(AkkaUtils.scala:175)
>         at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:218)
>         at org.apache.spark.storage.BlockManagerMaster.removeBroadcast(BlockManagerMaster.scala:126)
>
> --
> View this message in context:
> http://apache-spark-user-list.1001560.n3.nabble.com/unknown-issue-in-submitting-a-spark-job-tp21418.html
> Sent from the Apache Spark User List mailing list archive at Nabble.com.
>


-- 

Arush Kharbanda || Technical Teamlead

ar...@sigmoidanalytics.com || www.sigmoidanalytics.com
