[ https://issues.apache.org/jira/browse/SPARK-37392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17454148#comment-17454148 ]

Wenchen Fan commented on SPARK-37392:
-------------------------------------

I've figured out the root cause. The problematic part of the query plan is:
{code:java}
   +- Project [_1#21 AS a#106, _2#22 AS b#107, _3#23 AS c#108, _4#24 AS d#109, _5#25 AS e#110, _6#26 AS f#111, _7#27 AS g#112, _8#28 AS h#113, _9#29 AS i#114, _10#30 AS j#115, _11#31 AS k#116, _12#32 AS l#117, _13#33 AS m#118, _14#34 AS n#119, _15#35 AS o#120, _16#36 AS p#121, _17#37 AS q#122, _18#38 AS r#123, _19#39 AS s#124, _20#40 AS t#125, _21#41 AS u#126]
      +- Filter (size(array(cast(_1#21 as string), _2#22, _3#23, _4#24, _5#25, _6#26, _7#27, _8#28, _9#29, _10#30, _11#31, _12#32, _13#33, _14#34, _15#35, _16#36, _17#37, _18#38, _19#39, _20#40, _21#41), true) > 0)
         +- LogicalRDD [_1#21, _2#22, _3#23, _4#24, _5#25, _6#26, _7#27, _8#28, _9#29, _10#30, _11#31, _12#32, _13#33, _14#34, _15#35, _16#36, _17#37, _18#38, _19#39, _20#40, _21#41] {code}
When calculating the constraints of the Project, the relevant code is:
{code:java}
var allConstraints = child.constraints
projectList.foreach {
  case a @ Alias(l: Literal, _) =>
    allConstraints += EqualNullSafe(a.toAttribute, l)
  case a @ Alias(e, _) =>
    // For every alias in `projectList`, replace the reference in constraints by its attribute.
    allConstraints ++= allConstraints.map(_ transform {
      case expr: Expression if expr.semanticEquals(e) =>
        a.toAttribute
    })
    allConstraints += EqualNullSafe(e, a.toAttribute)
  case _ => // Don't change.
} {code}
`allConstraints` starts with the single `size(...)` predicate, and the `allConstraints ++= allConstraints.map(...)` step then roughly doubles the set for each alias in the project list, which leads to around 2^20 predicates.
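To make the blow-up concrete, here is a minimal standalone sketch (plain Scala, not the actual Catalyst types — a constraint is modeled as just the set of column names it references, and the column count is kept small so it runs instantly):
{code:java}
object ConstraintBlowup extends App {
  val numAliases = 10 // the real query has 21 aliases, i.e. on the order of 2^21 constraints

  // Start with one constraint that references every column, like the
  // single size(array(...)) > 0 predicate above.
  var constraints: Set[Set[String]] = Set((1 to numAliases).map(i => s"_$i").toSet)

  for (i <- 1 to numAliases) {
    // Mirrors `allConstraints ++= allConstraints.map(_ transform ...)`:
    // each existing constraint is rewritten to reference the alias, and
    // the original is kept as well, so the set doubles per alias.
    constraints ++= constraints.map(c => c - s"_$i" + s"alias_$i")
    println(s"after alias $i: ${constraints.size} constraints")
  }
}
{code}
Each iteration prints a set twice the size of the previous one, so the growth is exponential in the number of aliases.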

 

> It does not occur when replacing the single integer value (1) with a string 
> value ({_}"x"{_}).

This is because in that case there is no `cast(_1#21 as string)`, so the optimizer 
can fold `size(array(...))` into a constant and the filter disappears. The 
optimizer rule is too conservative: it skips optimizing 
`size(array(cast(_1#21 as string), ...))` because a cast may fail at runtime and 
is therefore treated as having side effects.
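A hedged way to observe this from a spark-shell (the exact optimized plans depend on the Spark version, but the contrast should be visible):
{code:java}
// All-string columns: size(array(a, b)) can be simplified to the literal 2,
// the predicate folds to true, and PruneFilters removes the Filter entirely.
val strings = Seq(("x", "x")).toDF("a", "b")
strings.filter("size(array(a, b)) > 0").explain(true)

// Mixed int/string columns: the array forces array(cast(a as string), b),
// the simplification is skipped, and the size(...) predicate survives
// into the optimized plan.
val mixed = Seq((1, "x")).toDF("a", "b")
mixed.filter("size(array(a, b)) > 0").explain(true)
{code}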

> Catalyst optimizer very time-consuming and memory-intensive with some "explode(array)"
> ---------------------------------------------------------------------------------------
>
>                 Key: SPARK-37392
>                 URL: https://issues.apache.org/jira/browse/SPARK-37392
>             Project: Spark
>          Issue Type: Bug
>          Components: Optimizer
>    Affects Versions: 3.1.2, 3.2.0
>            Reporter: Francois MARTIN
>            Priority: Major
>
> The problem occurs with the simple code below:
> {code:java}
> import session.implicits._
> Seq(
>   (1, "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x", "x")
> ).toDF()
>   .checkpoint() // or save and reload to truncate lineage
>   .createOrReplaceTempView("sub")
> session.sql("""
>   SELECT
>     *
>   FROM
>   (
>     SELECT
>       EXPLODE( ARRAY( * ) ) result
>     FROM
>     (
>       SELECT
>         _1 a, _2 b, _3 c, _4 d, _5 e, _6 f, _7 g, _8 h, _9 i, _10 j, _11 k, _12 l, _13 m, _14 n, _15 o, _16 p, _17 q, _18 r, _19 s, _20 t, _21 u
>       FROM
>         sub
>     )
>   )
>   WHERE
>     result != ''
>   """).show() {code}
> It takes several minutes and uses a very large amount of Java heap, when it 
> should complete immediately.
> It does not occur when replacing the single integer value (1) with a string 
> value ({_}"x"{_}).
> All the time is spent in the _PruneFilters_ optimization rule.
> Not reproduced in Spark 2.4.1.


