[ https://issues.apache.org/jira/browse/SPARK-22465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16292327#comment-16292327 ]
Sujith Jay Nair commented on SPARK-22465:
-----------------------------------------

Hi [~tgraves], is there a plan to resolve this behaviour of cogroup outside of the umbrella ticket for fixing the 2G limit ([SPARK-6235])? I wish to chip in if that is the case. Thank you.

> Cogroup of two disproportionate RDDs could lead to 2G limit bug
> ----------------------------------------------------------------
>
>                 Key: SPARK-22465
>                 URL: https://issues.apache.org/jira/browse/SPARK-22465
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0, 1.0.1, 1.0.2, 1.1.0, 1.1.1, 1.2.0, 1.2.1, 1.2.2, 1.3.0, 1.3.1, 1.4.0, 1.4.1, 1.5.0, 1.5.1, 1.5.2, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 2.0.0, 2.0.1, 2.0.2, 2.1.0, 2.1.1, 2.1.2, 2.2.0
>            Reporter: Amit Kumar
>            Priority: Critical
>
> While running my Spark pipeline, it failed with the following exception:
> {noformat}
> 2017-11-03 04:49:09,776 [Executor task launch worker for task 58670] ERROR org.apache.spark.executor.Executor - Exception in task 630.0 in stage 28.0 (TID 58670)
> java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
>     at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:869)
>     at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:103)
>     at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:91)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1303)
>     at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:105)
>     at org.apache.spark.storage.BlockManager.getLocalValues(BlockManager.scala:469)
>     at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:705)
>     at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
>     at org.apache.spark.scheduler.Task.run(Task.scala:99)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:324)
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>     at java.lang.Thread.run(Thread.java:745)
> {noformat}
> After debugging, I found that the issue lies in how Spark handles the cogroup of two RDDs.
> Here is the relevant code from Apache Spark:
> {noformat}
>   /**
>    * For each key k in `this` or `other`, return a resulting RDD that contains a tuple with the
>    * list of values for that key in `this` as well as `other`.
>    */
>   def cogroup[W](other: RDD[(K, W)]): RDD[(K, (Iterable[V], Iterable[W]))] = self.withScope {
>     cogroup(other, defaultPartitioner(self, other))
>   }
>
>   /**
>    * Choose a partitioner to use for a cogroup-like operation between a number of RDDs.
>    *
>    * If any of the RDDs already has a partitioner, choose that one.
>    *
>    * Otherwise, we use a default HashPartitioner. For the number of partitions, if
>    * spark.default.parallelism is set, then we'll use the value from SparkContext
>    * defaultParallelism, otherwise we'll use the max number of upstream partitions.
>    *
>    * Unless spark.default.parallelism is set, the number of partitions will be the
>    * same as the number of partitions in the largest upstream RDD, as this should
>    * be least likely to cause out-of-memory errors.
>    *
>    * We use two method parameters (rdd, others) to enforce callers passing at least 1 RDD.
>    */
>   def defaultPartitioner(rdd: RDD[_], others: RDD[_]*): Partitioner = {
>     val rdds = (Seq(rdd) ++ others)
>     val hasPartitioner = rdds.filter(_.partitioner.exists(_.numPartitions > 0))
>     if (hasPartitioner.nonEmpty) {
>       hasPartitioner.maxBy(_.partitions.length).partitioner.get
>     } else {
>       if (rdd.context.conf.contains("spark.default.parallelism")) {
>         new HashPartitioner(rdd.context.defaultParallelism)
>       } else {
>         new HashPartitioner(rdds.map(_.partitions.length).max)
>       }
>     }
>   }
> {noformat}
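> To make the failure mode concrete, here is a minimal, illustrative sketch of how this partitioner selection plays out (the variable names, data sizes, and partition counts below are assumptions for demonstration, not taken from my pipeline):
> {noformat}
> import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}
>
> val sc = new SparkContext(new SparkConf().setAppName("cogroup-skew-demo").setMaster("local[*]"))
>
> // A small pair RDD that carries a partitioner with very few partitions.
> val small = sc.parallelize(Seq((1, "a"), (2, "b"))).partitionBy(new HashPartitioner(2))
>
> // A much larger pair RDD with many partitions and no partitioner; imagine
> // billions of records in a real job.
> val huge = sc.parallelize(1 to 1000000, numSlices = 1000).map(i => (i % 100000, i))
>
> // With the defaultPartitioner shown above, small's HashPartitioner(2) wins,
> // since it is the only RDD with a partitioner, so all of huge is shuffled
> // into just 2 partitions.
> val grouped = small.cogroup(huge)
> println(grouped.partitions.length)  // prints 2
> {noformat}
> At real data volumes, those two output partitions can each exceed 2 GB, which is how the Size exceeds Integer.MAX_VALUE failure above arises.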
> Given this, suppose we have two pair RDDs:
> RDD1: a small RDD with little data and few partitions
> RDD2: a huge RDD with lots of data and many partitions
> Now suppose we cogroup them:
> {noformat}
> val RDD3 = RDD1.cogroup(RDD2)
> {noformat}
> There is a case where this can trigger the SPARK-6235 bug: if RDD1 already has a partitioner when cogroup is called, the cogroup's partitioning is decided by that partitioner, which can cause the huge RDD2 to be shuffled into a small number of partitions.
> One way to address this is probably to add a safety check here that ignores the existing partitioner when the partition counts of the two RDDs differ greatly in magnitude; a sketch of such a check follows.
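> A hedged sketch of what such a check could look like (the 10x eligibility threshold below is an assumption for illustration, not an agreed design):
> {noformat}
> import org.apache.spark.rdd.RDD
> import org.apache.spark.{HashPartitioner, Partitioner}
>
> def defaultPartitioner(rdd: RDD[_], others: RDD[_]*): Partitioner = {
>   val rdds = (Seq(rdd) ++ others)
>   val hasPartitioner = rdds.filter(_.partitioner.exists(_.numPartitions > 0))
>   val maxUpstream = rdds.map(_.partitions.length).max
>   // Reuse an existing partitioner only if its partition count is within one
>   // order of magnitude of the largest upstream RDD (assumed threshold).
>   val eligible = hasPartitioner.nonEmpty &&
>     maxUpstream <= hasPartitioner.maxBy(_.partitions.length).partitioner.get.numPartitions * 10
>   if (eligible) {
>     hasPartitioner.maxBy(_.partitions.length).partitioner.get
>   } else if (rdd.context.conf.contains("spark.default.parallelism")) {
>     new HashPartitioner(rdd.context.defaultParallelism)
>   } else {
>     new HashPartitioner(maxUpstream)
>   }
> }
> {noformat}
> With a guard like this, the cogroup above would fall back to a HashPartitioner sized by the largest upstream RDD instead of collapsing everything into RDD1's two partitions.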