[ https://issues.apache.org/jira/browse/SPARK-36903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17426075#comment-17426075 ]

JacobZheng commented on SPARK-36903:
------------------------------------

Yes, the same issue occurs on Spark 3.1. I'm actually hitting a corner case:
I'm running a SQL query with thousands of CASE WHEN branches. Because of the
complex calculation logic and the excessive number of CASE WHEN branches,
codegen produces a code block of over a million lines. I just want a way to
limit code generation.
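To illustrate the kind of guard I have in mind, here is a toy sketch. This is not Spark's actual codegen; the names `genCaseWhen` and `maxBranches` are hypothetical, standing in for whatever a restored `spark.sql.codegen.maxCaseBranches`-style limit would control:

```scala
// Toy sketch, not Spark internals: generated code for a CASE WHEN grows
// linearly with the branch count, so a configurable cap could force a
// fallback to interpreted evaluation instead of emitting huge code.
object CaseWhenCodegenSketch {
  // Returns Some(code) when generation is allowed, None to signal that the
  // caller should fall back to interpreted evaluation.
  def genCaseWhen(numBranches: Int, maxBranches: Int): Option[String] = {
    if (numBranches > maxBranches) {
      None // too many branches: skip codegen entirely
    } else {
      val branches = (0 until numBranches).map { i =>
        s"if (input == $i) { return result_$i; }"
      }
      Some(branches.mkString("\n"))
    }
  }

  def main(args: Array[String]): Unit = {
    assert(genCaseWhen(10, 20).isDefined)  // small CASE WHEN: codegen allowed
    assert(genCaseWhen(5000, 20).isEmpty)  // huge CASE WHEN: fall back
    println("ok")
  }
}
```

The point is that the branch count is known before any code text is built, so the check is cheap and avoids materializing the million-line string at all.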

> oom exception occurred during code generation due to a large number of case 
> when branches
> -----------------------------------------------------------------------------------------
>
>                 Key: SPARK-36903
>                 URL: https://issues.apache.org/jira/browse/SPARK-36903
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 3.0.1
>            Reporter: JacobZheng
>            Priority: Major
>
> I have a Spark task that contains many CASE WHEN branches. When I run it, the
> driver throws an OOM exception in the codegen phase. What I would like to know
> is whether it is possible to detect or limit this in the codegen phase so the
> OOM can be avoided.
>  
> I see that Spark 2.2 had a configuration item
> spark.sql.codegen.maxCaseBranches. Would it help my situation if I tried to
> add this limit back?
>  
> This is the stack trace I captured with jstack:
> {code:java}
> "SparkJobEngine-akka.actor.default-dispatcher-9" #23010 prio=5 os_prio=0 cpu=197487.25ms elapsed=7213.71s tid=0x00007fb08c019800 nid=0x5fb9 runnable [0x00007fb072af2000]
>    java.lang.Thread.State: RUNNABLE
>     at scala.collection.immutable.StringLike$$Lambda$1790/0x0000000840ee4840.apply(Unknown Source)
>     at scala.collection.Iterator.foreach(Iterator.scala:941)
>     at scala.collection.Iterator.foreach$(Iterator.scala:941)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
>     at scala.collection.immutable.StringLike.stripMargin(StringLike.scala:187)
>     at scala.collection.immutable.StringLike.stripMargin$(StringLike.scala:185)
>     at scala.collection.immutable.StringOps.stripMargin(StringOps.scala:33)
>     at org.apache.spark.sql.catalyst.expressions.codegen.Block.toString(javaCode.scala:142)
>     at org.apache.spark.sql.catalyst.expressions.codegen.Block.toString$(javaCode.scala:141)
>     at org.apache.spark.sql.catalyst.expressions.codegen.CodeBlock.toString(javaCode.scala:286)
>     at org.apache.spark.sql.catalyst.expressions.codegen.Block.length(javaCode.scala:149)
>     at org.apache.spark.sql.catalyst.expressions.codegen.Block.length$(javaCode.scala:149)
>     at org.apache.spark.sql.catalyst.expressions.codegen.CodeBlock.length(javaCode.scala:286)
>     at org.apache.spark.sql.catalyst.expressions.Expression.reduceCodeSize(Expression.scala:160)
>     at org.apache.spark.sql.catalyst.expressions.Expression.$anonfun$genCode$3(Expression.scala:147)
>     at org.apache.spark.sql.catalyst.expressions.Expression$$Lambda$2784/0x000000084131b840.apply(Unknown Source)
>     at scala.Option.getOrElse(Option.scala:189)
>     at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:141)
>     at org.apache.spark.sql.catalyst.expressions.And.doGenCode(predicates.scala:567)
>     at org.apache.spark.sql.catalyst.expressions.Expression.$anonfun$genCode$3(Expression.scala:146)
>     at org.apache.spark.sql.catalyst.expressions.Expression$$Lambda$2784/0x000000084131b840.apply(Unknown Source)
>     at scala.Option.getOrElse(Option.scala:189)
>     at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:141)
>     at org.apache.spark.sql.catalyst.expressions.CaseWhen.$anonfun$multiBranchesCodegen$1(conditionalExpressions.scala:209)
>     at org.apache.spark.sql.catalyst.expressions.CaseWhen$$Lambda$4626/0x00000008415b8840.apply(Unknown Source)
>     at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
>     at scala.collection.TraversableLike$$Lambda$83/0x00000008401bc040.apply(Unknown Source)
>     at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>     at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:238)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>     at org.apache.spark.sql.catalyst.expressions.CaseWhen.multiBranchesCodegen(conditionalExpressions.scala:208)
>     at org.apache.spark.sql.catalyst.expressions.CaseWhen.doGenCode(conditionalExpressions.scala:291)
>     at org.apache.spark.sql.catalyst.expressions.Expression.$anonfun$genCode$3(Expression.scala:146)
>     at org.apache.spark.sql.catalyst.expressions.Expression$$Lambda$2784/0x000000084131b840.apply(Unknown Source)
>     at scala.Option.getOrElse(Option.scala:189)
>     at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:141)
>     at org.apache.spark.sql.catalyst.expressions.Concat.$anonfun$doGenCode$22(collectionOperations.scala:2120)
>     at org.apache.spark.sql.catalyst.expressions.Concat$$Lambda$5022/0x0000000841a60840.apply(Unknown Source)
>     at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:238)
>     at scala.collection.TraversableLike$$Lambda$83/0x00000008401bc040.apply(Unknown Source)
>     at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
>     at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
>     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
>     at scala.collection.TraversableLike.map(TraversableLike.scala:238)
>     at scala.collection.TraversableLike.map$(TraversableLike.scala:231)
>     at scala.collection.AbstractTraversable.map(Traversable.scala:108)
>     at org.apache.spark.sql.catalyst.expressions.Concat.doGenCode(collectionOperations.scala:2120)
>     at org.apache.spark.sql.catalyst.expressions.Expression.$anonfun$genCode$3(Expression.scala:146)
>     at org.apache.spark.sql.catalyst.expressions.Expression$$Lambda$2784/0x000000084131b840.apply(Unknown Source)
>     at scala.Option.getOrElse(Option.scala:189)
>     at org.apache.spark.sql.catalyst.expressions.Expression.genCode(Expression.scala:141)
>     at org.apache.spark.sql.catalyst.expressions.BinaryExpression.nullSafeCodeGen(Expression.scala:593)
>     at org.apache.spark.sql.catalyst.expressions.BinaryExpression.defineCodeGen(Expression.scala:576)
>     at org.apache.spark.sql.catalyst.expressions.EqualTo.doGenCode(predicates.scala:750)
>     at org.apache.spark.sql.catalyst.expressions.Expression.$anonfun$genCode$3(Expression.scala:146)
>     at ......{code}
>  
>  
>  
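A note for anyone hitting this: Spark 3.x exposes a few codegen-related settings that can shrink or skip generated code. This is a sketch of possible workarounds, not a confirmed fix for this particular driver OOM:

{code:sql}
-- All three are real Spark SQL configs; whether they avoid this specific
-- OOM during CASE WHEN codegen is untested here.
SET spark.sql.codegen.wholeStage=false;          -- disable whole-stage codegen entirely
SET spark.sql.codegen.hugeMethodLimit=8000;      -- fall back when a generated method's bytecode exceeds this size (JIT limit is ~8000)
SET spark.sql.codegen.methodSplitThreshold=1024; -- split generated code into smaller methods (Spark 3.0+)
{code}

Note that the OOM above happens while building the code *string*, before compilation, so settings that only affect compilation fallback may not help; disabling codegen outright is the safest of the three to try first.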



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
