Hi Liquan,

Thanks. I was running this in spark-shell. I was able to resolve the issue by packaging the code as an app and submitting it via spark-submit in yarn-client mode. I have seen this happen before as well: code that hits memory problems in spark-shell runs fine when submitted as an app with spark-submit in yarn-client mode. I am not sure whether this is due to a difference between spark-shell and spark-submit, or between YARN and non-YARN mode.
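For anyone who lands on this thread later, here is a minimal sketch of the two remedies discussed below: more memory at launch, and more input partitions so each task parses a smaller slice of the file. The 4g sizes and the partition count of 32 are arbitrary illustrations, not tuned values, and the four-argument (sc, path, numFeatures, minPartitions) overload assumes a Spark release that has it; older 1.0.x builds instead expose a variant that also takes a multiclass flag.

// Launch with more memory headroom (values illustrative, not tuned):
//   spark-submit --master yarn-client --driver-memory 4g --executor-memory 4g myapp.jar
// spark-shell accepts the same --driver-memory / --executor-memory flags.

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.rdd.RDD

// numFeatures <= 0 asks MLUtils to infer the feature dimension from the data;
// minPartitions = 32 (an arbitrary choice) splits parsing into more, smaller
// tasks, which lowers per-task memory pressure during the load.
val examples: RDD[LabeledPoint] =
  MLUtils.loadLibSVMFile(sc, "structured/results/data.txt", -1, 32)

Separately, since the original ArrayIndexOutOfBoundsException came out of String.split inside the parser, the input format is worth double-checking: each LibSVM line is "label index1:value1 index2:value2 ..." with one-based, strictly increasing indices, e.g. "1 3:0.5 7:1.2". A token missing its colon is the kind of thing that produces that index-1 failure at MLUtils.scala:82.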
Date: Wed, 24 Sep 2014 22:13:35 -0700
Subject: Re: MLUtils.loadLibSVMFile error
From: liquan...@gmail.com
To: ssti...@live.com
CC: so...@cloudera.com; user@spark.apache.org

Hi Sameer,

I think there are two things that you can do:

1) What are your current driver-memory and executor-memory settings? You can try increasing driver-memory or executor-memory to see if that solves your problem.
2) How many features are in your data? Too many features may create a large number of temporary objects, which may also cause GC to happen.

Hope this helps!
Liquan

On Wed, Sep 24, 2014 at 9:50 PM, Sameer Tilak <ssti...@live.com> wrote:

Hi All,

I was able to solve the formatting issue. However, I have another question. When I do the following,

val examples: RDD[LabeledPoint] = MLUtils.loadLibSVMFile(sc, "structured/results/data.txt")

I get a java.lang.OutOfMemoryError: GC overhead limit exceeded error. Is it possible to specify the number of partitions explicitly? I want to add that this dataset is sparse and fairly small -- ~250 MB.

Error log:

14/09/24 21:41:02 ERROR Executor: Exception in task ID 0
java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.regex.Pattern.compile(Pattern.java:1655)
    at java.util.regex.Pattern.<init>(Pattern.java:1337)
    at java.util.regex.Pattern.compile(Pattern.java:1022)
    at java.lang.String.split(String.java:2313)
    at java.lang.String.split(String.java:2355)
    at scala.collection.immutable.StringLike$class.split(StringLike.scala:201)
    at scala.collection.immutable.StringOps.split(StringOps.scala:31)
    at org.apache.spark.mllib.util.MLUtils$$anonfun$4$$anonfun$5.apply(MLUtils.scala:80)
    at org.apache.spark.mllib.util.MLUtils$$anonfun$4$$anonfun$5.apply(MLUtils.scala:79)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
    at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
    at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
    at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
    at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
    at org.apache.spark.mllib.util.MLUtils$$anonfun$4.apply(MLUtils.scala:79)
    at org.apache.spark.mllib.util.MLUtils$$anonfun$4.apply(MLUtils.scala:76)
    at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
    at scala.collection.Iterator$class.foreach(Iterator.scala:727)
    at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
    at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
    at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:107)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
    at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
    at org.apache.spark.scheduler.Task.run(Task.scala:51)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
14/09/24 21:41:02 ERROR ExecutorUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-0,5,main]
java.lang.OutOfMemoryError: GC overhead limit exceeded
    [stack trace identical to the one above]

> From: so...@cloudera.com
> Date: Wed, 24 Sep 2014 23:07:27 +0100
> Subject: Re: MLUtils.loadLibSVMFile error
> To: ssti...@live.com
>
> Well, why not show some of the file? It's pretty certain there is a
> problem with the format. The large repeated stack trace doesn't say
> anything more.
>
> On Wed, Sep 24, 2014 at 11:02 PM, Sameer Tilak <ssti...@live.com> wrote:
> > Hi All,
> >
> > When I try to load a dataset using MLUtils.loadLibSVMFile, I have the
> > following problem. Any help will be greatly appreciated!
> >
> > Code snippet:
> >
> > import org.apache.spark.mllib.regression.LabeledPoint
> > import org.apache.spark.mllib.util.MLUtils
> > import org.apache.spark.rdd.RDD
> > import org.apache.spark.mllib.regression.LinearRegressionWithSGD
> >
> > val examples: RDD[LabeledPoint] =
> >   MLUtils.loadLibSVMFile(sc, "structured/results/data.txt")
> >
> > Stack trace:
> >
> > 14/09/24 15:00:49 ERROR Executor: Exception in task ID 0
> > java.lang.ArrayIndexOutOfBoundsException: 1
> >     at org.apache.spark.mllib.util.MLUtils$$anonfun$4$$anonfun$5.apply(MLUtils.scala:82)
> >     at org.apache.spark.mllib.util.MLUtils$$anonfun$4$$anonfun$5.apply(MLUtils.scala:79)
> >     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> >     at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> >     at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
> >     at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:108)
> >     at scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> >     at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:108)
> >     at org.apache.spark.mllib.util.MLUtils$$anonfun$4.apply(MLUtils.scala:79)
> >     at org.apache.spark.mllib.util.MLUtils$$anonfun$4.apply(MLUtils.scala:76)
> >     at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
> >     at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> >     at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> >     at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
> >     at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
> >     at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:107)
> >     at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
> >     at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
> >     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
> >     at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
> >     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
> >     at org.apache.spark.scheduler.Task.run(Task.scala:51)
> >     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
> >     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
> >     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >     at java.lang.Thread.run(Thread.java:744)
> > 14/09/24 15:00:49 ERROR Executor: Exception in task ID 1
> > java.lang.ArrayIndexOutOfBoundsException: 1
> >     [stack trace identical to the one above]
> > 14/09/24 15:00:49 WARN TaskSetManager: Lost TID 0 (task 0.0:0)
> > 14/09/24 15:00:49 WARN TaskSetManager: Loss was due to java.lang.ArrayIndexOutOfBoundsException
> > java.lang.ArrayIndexOutOfBoundsException: 1
> >     [stack trace identical to the one above]
> > 14/09/24 15:00:49 ERROR TaskSetManager: Task 0.0:0 failed 1 times; aborting job
> > 14/09/24 15:00:49 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
> > 14/09/24 15:00:49 INFO DAGScheduler: Failed to run reduce at MLUtils.scala:95
> > 14/09/24 15:00:49 INFO TaskSchedulerImpl: Cancelling stage 0
> > 14/09/24 15:00:49 INFO TaskSetManager: Loss was due to java.lang.ArrayIndexOutOfBoundsException: 1 [duplicate 1]
> > 14/09/24 15:00:49 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
> > org.apache.spark.SparkException: Job aborted due to stage failure: Task 0.0:0 failed 1 times, most recent failure: Exception failure in TID 0 on host localhost: java.lang.ArrayIndexOutOfBoundsException: 1
> >     [same MLUtils parsing frames as in the executor trace above]
> > Driver stacktrace:
> >     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
> >     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
> >     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
> >     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> >     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> >     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
> >     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
> >     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
> >     at scala.Option.foreach(Option.scala:236)
> >     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
> >     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1207)
> >     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
> >     at akka.actor.ActorCell.invoke(ActorCell.scala:456)
> >     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
> >     at akka.dispatch.Mailbox.run(Mailbox.scala:219)
> >     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
> >     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
> >     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
> >     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
> >     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

-- 
Liquan Pei
Department of Physics
University of Massachusetts Amherst