[ https://issues.apache.org/jira/browse/SPARK-2251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Patrick Wendell resolved SPARK-2251.
------------------------------------
    Resolution: Fixed
    Fix Version/s: 1.1.0

Issue resolved by pull request 1229
[https://github.com/apache/spark/pull/1229]

> MLlib Naive Bayes example throws SparkException: Can only zip RDDs with same number
> of elements in each partition
> ----------------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-2251
>                 URL: https://issues.apache.org/jira/browse/SPARK-2251
>             Project: Spark
>          Issue Type: Bug
>          Components: MLlib
>    Affects Versions: 1.0.0
>       Environment: OS: Fedora Linux
>                    Spark version: 1.0.0, built from a git clone of the Spark repository
>            Reporter: Jun Xie
>            Assignee: Xiangrui Meng
>            Priority: Minor
>              Labels: Naive-Bayes
>             Fix For: 1.0.1, 1.1.0
>
> I followed the exact code from the MLlib Naive Bayes example
> (http://spark.apache.org/docs/latest/mllib-naive-bayes.html).
> When I executed the final command:
> val accuracy = 1.0 * predictionAndLabel.filter(x => x._1 == x._2).count() / test.count()
> it failed with "Can only zip RDDs with same number of elements in each
> partition". I got the following exception:
> {code}
> 14/06/23 19:39:23 INFO SparkContext: Starting job: count at <console>:31
> 14/06/23 19:39:23 INFO DAGScheduler: Got job 3 (count at <console>:31) with 2 output partitions (allowLocal=false)
> 14/06/23 19:39:23 INFO DAGScheduler: Final stage: Stage 4 (count at <console>:31)
> 14/06/23 19:39:23 INFO DAGScheduler: Parents of final stage: List()
> 14/06/23 19:39:23 INFO DAGScheduler: Missing parents: List()
> 14/06/23 19:39:23 INFO DAGScheduler: Submitting Stage 4 (FilteredRDD[14] at filter at <console>:31), which has no missing parents
> 14/06/23 19:39:23 INFO DAGScheduler: Submitting 2 missing tasks from Stage 4 (FilteredRDD[14] at filter at <console>:31)
> 14/06/23 19:39:23 INFO TaskSchedulerImpl: Adding task set 4.0 with 2 tasks
> 14/06/23 19:39:23 INFO TaskSetManager: Starting task 4.0:0 as TID 8 on executor localhost: localhost (PROCESS_LOCAL)
> 14/06/23 19:39:23 INFO TaskSetManager: Serialized task 4.0:0 as 3410 bytes in 0 ms
> 14/06/23 19:39:23 INFO TaskSetManager: Starting task 4.0:1 as TID 9 on executor localhost: localhost (PROCESS_LOCAL)
> 14/06/23 19:39:23 INFO TaskSetManager: Serialized task 4.0:1 as 3410 bytes in 1 ms
> 14/06/23 19:39:23 INFO Executor: Running task ID 8
> 14/06/23 19:39:23 INFO Executor: Running task ID 9
> 14/06/23 19:39:23 INFO BlockManager: Found block broadcast_0 locally
> 14/06/23 19:39:23 INFO BlockManager: Found block broadcast_0 locally
> 14/06/23 19:39:23 INFO HadoopRDD: Input split: file:/home/jun/open_source/spark/mllib/data/sample_naive_bayes_data.txt:0+24
> 14/06/23 19:39:23 INFO HadoopRDD: Input split: file:/home/jun/open_source/spark/mllib/data/sample_naive_bayes_data.txt:24+24
> 14/06/23 19:39:23 INFO HadoopRDD: Input split: file:/home/jun/open_source/spark/mllib/data/sample_naive_bayes_data.txt:0+24
> 14/06/23 19:39:23 INFO HadoopRDD: Input split: file:/home/jun/open_source/spark/mllib/data/sample_naive_bayes_data.txt:24+24
> 14/06/23 19:39:23 ERROR Executor: Exception in task ID 9
> org.apache.spark.SparkException: Can only zip RDDs with same number of elements in each partition
>     at org.apache.spark.rdd.RDD$$anonfun$zip$1$$anon$1.hasNext(RDD.scala:663)
>     at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
>     at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1067)
>     at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:858)
>     at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:858)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1079)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1079)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>     at org.apache.spark.scheduler.Task.run(Task.scala:51)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:724)
> 14/06/23 19:39:23 ERROR Executor: Exception in task ID 8
> org.apache.spark.SparkException: Can only zip RDDs with same number of elements in each partition
>     at org.apache.spark.rdd.RDD$$anonfun$zip$1$$anon$1.hasNext(RDD.scala:663)
>     at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
>     at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1067)
>     at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:858)
>     at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:858)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1079)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1079)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>     at org.apache.spark.scheduler.Task.run(Task.scala:51)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:724)
> 14/06/23 19:39:23 WARN TaskSetManager: Lost TID 8 (task 4.0:0)
> 14/06/23 19:39:23 WARN TaskSetManager: Loss was due to org.apache.spark.SparkException
> org.apache.spark.SparkException: Can only zip RDDs with same number of elements in each partition
>     at org.apache.spark.rdd.RDD$$anonfun$zip$1$$anon$1.hasNext(RDD.scala:663)
>     at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
>     at org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1067)
>     at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:858)
>     at org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:858)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1079)
>     at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1079)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>     at org.apache.spark.scheduler.Task.run(Task.scala:51)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>     at java.lang.Thread.run(Thread.java:724)
> 14/06/23 19:39:23 ERROR TaskSetManager: Task 4.0:0 failed 1 times; aborting job
> 14/06/23 19:39:23 INFO TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
> 14/06/23 19:39:23 INFO DAGScheduler: Failed to run count at <console>:31
> 14/06/23 19:39:23 INFO TaskSetManager: Loss was due to org.apache.spark.SparkException: Can only zip RDDs with same number of elements in each partition [duplicate 1]
> 14/06/23 19:39:23 INFO TaskSchedulerImpl: Removed TaskSet 4.0, whose tasks have all completed, from pool
> 14/06/23 19:39:23 INFO TaskSchedulerImpl: Cancelling stage 4
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 4.0:0 failed 1 times, most recent failure: Exception failure in TID 8 on host localhost: org.apache.spark.SparkException: Can only zip RDDs with same number of elements in each partition
>         org.apache.spark.rdd.RDD$$anonfun$zip$1$$anon$1.hasNext(RDD.scala:663)
>         scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
>         org.apache.spark.util.Utils$.getIteratorSize(Utils.scala:1067)
>         org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:858)
>         org.apache.spark.rdd.RDD$$anonfun$count$1.apply(RDD.scala:858)
>         org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1079)
>         org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1079)
>         org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
>         org.apache.spark.scheduler.Task.run(Task.scala:51)
>         org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
>         java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>         java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         java.lang.Thread.run(Thread.java:724)
> Driver stacktrace:
>     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1038)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1022)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1020)
>     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1020)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:638)
>     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:638)
>     at scala.Option.foreach(Option.scala:236)
>     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:638)
>     at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1212)
>     at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
>     at akka.actor.ActorCell.invoke(ActorCell.scala:456)
>     at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
>     at akka.dispatch.Mailbox.run(Mailbox.scala:219)
>     at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
>     at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
>     at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
>     at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
>     at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
> {code}

--
This message was sent by Atlassian JIRA
(v6.2#6252)
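For context: RDD.zip requires its two inputs to have exactly the same number of elements in every partition, and the trace above shows count() failing inside a zip iterator, because the example built predictionAndLabel by zipping a prediction RDD against a label RDD. One way to sidestep the failure is to build the (prediction, label) pairs in a single map over the test set, so no zip is involved. Below is a minimal sketch of that approach, assuming the Spark 1.x RDD-based MLlib API; the object name, split weights, and seed are illustrative, and the data path is taken from the log above, so this is not the exact text of the docs example:

{code}
import org.apache.spark.SparkContext
import org.apache.spark.mllib.classification.NaiveBayes
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.regression.LabeledPoint

// Illustrative driver; names here are assumptions, not the docs example verbatim.
object NaiveBayesZipFreeSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext("local", "NaiveBayesZipFreeSketch")

    // Each line of the sample file is "label,f1 f2 f3" (path taken from the log above).
    val data = sc.textFile("mllib/data/sample_naive_bayes_data.txt")
    val parsedData = data.map { line =>
      val parts = line.split(',')
      LabeledPoint(parts(0).toDouble, Vectors.dense(parts(1).split(' ').map(_.toDouble)))
    }

    // Split into 60% training / 40% test. Caching pins the sampled RDDs so
    // later actions reuse the same elements instead of recomputing the split.
    val splits = parsedData.randomSplit(Array(0.6, 0.4), seed = 11L)
    val training = splits(0).cache()
    val test = splits(1).cache()

    val model = NaiveBayes.train(training, lambda = 1.0)

    // Build (prediction, label) pairs in one pass over `test` rather than
    // zipping a prediction RDD against a label RDD, so per-partition
    // element counts can never disagree.
    val predictionAndLabel = test.map(p => (model.predict(p.features), p.label))
    val accuracy = 1.0 * predictionAndLabel.filter(x => x._1 == x._2).count() / test.count()
    println("Test accuracy = " + accuracy)

    sc.stop()
  }
}
{code}

The single-pass map is the structural point; caching the splits is a belt-and-braces measure against the sampled RDDs being recomputed differently across actions.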