[GitHub] [spark] cloud-fan commented on a change in pull request #30504: [SPARK-33544][SQL] Optimizer should not insert filter when explode with CreateArray/CreateMap

2020-12-01 Thread GitBox


cloud-fan commented on a change in pull request #30504:
URL: https://github.com/apache/spark/pull/30504#discussion_r533645049



##
File path: sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/optimizer/ConstantFoldingSuite.scala
##
@@ -263,4 +263,19 @@ class ConstantFoldingSuite extends PlanTest {
 
     comparePlans(optimized, correctAnswer)
   }
+
+  test("Constant folding test with side effects") {
+    val originalQuery =
+      testRelation
+        .select("size(array(assert_true(false)))")
+
+    val optimized = Optimize.execute(originalQuery.analyze)
+
+    val correctAnswer =

Review comment:
   nit: we can simply write `comparePlans(optimized, originalQuery.analyze)`





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org



[GitHub] [spark] cloud-fan commented on a change in pull request #30504: [SPARK-33544][SQL] Optimizer should not insert filter when explode with CreateArray/CreateMap

2020-12-01 Thread GitBox


cloud-fan commented on a change in pull request #30504:
URL: https://github.com/apache/spark/pull/30504#discussion_r533540575



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
##
@@ -48,6 +48,9 @@ object ConstantFolding extends Rule[LogicalPlan] {
       // object and running eval unnecessarily.
       case l: Literal => l
 
+      case Size(c: CreateArray, _) => Literal(c.children.length)
+      case Size(c: CreateMap, _) => Literal(c.children.length / 2)

Review comment:
   Seems we need a way to propagate the no-side-effect property:
   ```
   def hasNoSideEffect(e: Expression): Boolean = e match {
     case _: Attribute => true
     case _: Literal => true
     case _: NoSideEffect => e.children.forall(hasNoSideEffect)
     case _ => false
   }
   ...
   case Size(c: CreateArray, _) if c.children.forall(hasNoSideEffect) =>
     Literal(c.children.length)
   case Size(c: CreateMap, _) if c.children.forall(hasNoSideEffect) =>
     Literal(c.children.length / 2)
   ```
   
   `CreateStruct/Array/Map` can extend the trait `NoSideEffect`, and we can add more in the future.
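   Paraphrasing the suggestion as a runnable sketch. The classes below are toy stand-ins for Catalyst's `Expression`, `Attribute`, `Literal`, `CreateArray`, `AssertTrue` and `Size`, not the real API; only the shape of the `hasNoSideEffect` check and its guard on the fold mirror the comment:

   ```scala
   // Toy stand-ins for Catalyst expressions; not the real classes.
   sealed trait Expression { def children: Seq[Expression] }
   trait NoSideEffect extends Expression

   case class Attribute(name: String) extends Expression { val children: Seq[Expression] = Nil }
   case class Literal(value: Any) extends Expression { val children: Seq[Expression] = Nil }
   case class CreateArray(children: Seq[Expression]) extends Expression with NoSideEffect
   case class AssertTrue(child: Expression) extends Expression { val children: Seq[Expression] = Seq(child) }
   case class Size(child: Expression) extends Expression { val children: Seq[Expression] = Seq(child) }

   // Attributes and literals are trivially safe; a NoSideEffect node is safe
   // only if everything under it is safe too. Everything else (e.g. assert_true)
   // is conservatively treated as side-effecting.
   def hasNoSideEffect(e: Expression): Boolean = e match {
     case _: Attribute => true
     case _: Literal => true
     case _: NoSideEffect => e.children.forall(hasNoSideEffect)
     case _ => false
   }

   // size(array(a, 1)) is safe to fold to a literal; size(array(assert_true(false)))
   // is not, because folding would skip evaluating the assertion.
   def foldSize(e: Expression): Expression = e match {
     case Size(c: CreateArray) if c.children.forall(hasNoSideEffect) =>
       Literal(c.children.length)
     case other => other
   }
   ```

   The key design point is that the property is propagated recursively: an opt-in `NoSideEffect` marker on the node itself is not enough, since its children may still contain side-effecting expressions.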








[GitHub] [spark] cloud-fan commented on a change in pull request #30504: [SPARK-33544][SQL] Optimizer should not insert filter when explode with CreateArray/CreateMap

2020-12-01 Thread GitBox


cloud-fan commented on a change in pull request #30504:
URL: https://github.com/apache/spark/pull/30504#discussion_r53699



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/expressions.scala
##
@@ -48,6 +48,9 @@ object ConstantFolding extends Rule[LogicalPlan] {
       // object and running eval unnecessarily.
       case l: Literal => l
 
+      case Size(c: CreateArray, _) => Literal(c.children.length)
+      case Size(c: CreateMap, _) => Literal(c.children.length / 2)

Review comment:
   good point. We should only allow attributes and literals.








[GitHub] [spark] cloud-fan commented on a change in pull request #30504: [SPARK-33544][SQL] Optimizer should not insert filter when explode with CreateArray/CreateMap

2020-11-30 Thread GitBox


cloud-fan commented on a change in pull request #30504:
URL: https://github.com/apache/spark/pull/30504#discussion_r532447405



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
##
@@ -873,24 +873,30 @@ object InferFiltersFromGenerate extends Rule[LogicalPlan] {
       if !e.deterministic || e.children.forall(_.foldable) => generate
 
     case generate @ Generate(g, _, false, _, _, _) if canInferFilters(g) =>
-      // Exclude child's constraints to guarantee idempotency
-      val inferredFilters = ExpressionSet(
-        Seq(
-          GreaterThan(Size(g.children.head), Literal(0)),
-          IsNotNull(g.children.head)
-        )
-      ) -- generate.child.constraints
-
-      if (inferredFilters.nonEmpty) {
-        generate.copy(child = Filter(inferredFilters.reduce(And), generate.child))
-      } else {
-        generate
+      g.children.head match {
+        case _: CreateNonEmptyNonNullCollection =>
+          // we don't need to add filters when creating an array because we
+          // know its size is > 0 and it's not null
+          generate
+        case _ =>
+          // Exclude child's constraints to guarantee idempotency
+          val inferredFilters = ExpressionSet(
+            Seq(
+              GreaterThan(Size(g.children.head), Literal(0)),
+              IsNotNull(g.children.head)

Review comment:
   The semantics of an expression are decided by the expression itself, not by where we create the expression.
   
   Looking at it again, I think what's 100% safe is turning `Size(CreateArray)` into `CreateArray.children.length`, e.g.
   ```
   case Size(c: CreateArray) => Literal(c.children.length)
   case Size(c: CreateMap) => Literal(c.children.length / 2)
   ```
   We can put this in rule `ConstantFolding`.
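   As a self-contained illustration of why this fold is possible at all, here is a toy model (stand-in classes, not Catalyst's): the element count is visible in the constructor itself, so `Size` can be replaced without evaluating anything. Note that, as the later comments in this thread point out, this unconditional version is only safe when the children have no side effects:

   ```scala
   // Toy stand-ins for the Catalyst classes named in the comment; not the real API.
   sealed trait Expr
   case class Lit(value: Int) extends Expr
   case class CreateArray(children: Seq[Expr]) extends Expr
   case class CreateMap(children: Seq[Expr]) extends Expr // children alternate key, value
   case class Size(child: Expr) extends Expr

   // The number of elements is known from the constructor alone, so Size can be
   // replaced by a literal. CreateMap interleaves keys and values, hence the
   // division by 2.
   def foldSize(e: Expr): Expr = e match {
     case Size(c: CreateArray) => Lit(c.children.length)
     case Size(c: CreateMap)   => Lit(c.children.length / 2)
     case other                => other
   }
   ```

   For example, `foldSize(Size(CreateArray(Seq(Lit(1), Lit(2), Lit(3)))))` collapses to `Lit(3)` without touching the array elements.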








[GitHub] [spark] cloud-fan commented on a change in pull request #30504: [SPARK-33544][SQL] Optimizer should not insert filter when explode with CreateArray/CreateMap

2020-11-30 Thread GitBox


cloud-fan commented on a change in pull request #30504:
URL: https://github.com/apache/spark/pull/30504#discussion_r531413098



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
##
@@ -873,24 +873,30 @@ object InferFiltersFromGenerate extends Rule[LogicalPlan] {
       if !e.deterministic || e.children.forall(_.foldable) => generate
 
     case generate @ Generate(g, _, false, _, _, _) if canInferFilters(g) =>
-      // Exclude child's constraints to guarantee idempotency
-      val inferredFilters = ExpressionSet(
-        Seq(
-          GreaterThan(Size(g.children.head), Literal(0)),
-          IsNotNull(g.children.head)
-        )
-      ) -- generate.child.constraints
-
-      if (inferredFilters.nonEmpty) {
-        generate.copy(child = Filter(inferredFilters.reduce(And), generate.child))
-      } else {
-        generate
+      g.children.head match {
+        case _: CreateNonEmptyNonNullCollection =>
+          // we don't need to add filters when creating an array because we
+          // know its size is > 0 and it's not null
+          generate
+        case _ =>
+          // Exclude child's constraints to guarantee idempotency
+          val inferredFilters = ExpressionSet(
+            Seq(
+              GreaterThan(Size(g.children.head), Literal(0)),
+              IsNotNull(g.children.head)

Review comment:
   I mean, this rule can add redundant predicates, and we just need another 
rule to optimize them out. This is how catalyst rules should interact: be 
orthogonal and focus on one thing.
   
   Actually, `CreateArray.nullable` is false, so `IsNotNull(CreateArray(...))` 
will be optimized to `true` already, in rule `NullPropagation`. We can probably 
update `SimplifyBinaryComparison` and add
   ```
   case GreaterThan(Size(_: CreateNonNullCollection), IntegerLiteral(0)) =>
     TrueLiteral
   ```
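   A toy sketch of the suggested `SimplifyBinaryComparison` case (stand-in classes, not Catalyst's). One caveat worth labeling: the `nonEmpty` guard here is an addition to the snippet above, since `size(array())` is 0 and the comparison only simplifies to true for a non-empty constructor:

   ```scala
   // Toy expression tree; all class names are stand-ins for illustration.
   sealed trait Expr
   case class IntLit(value: Int) extends Expr
   case object TrueLiteral extends Expr
   case class CreateArray(children: Seq[Expr]) extends Expr // never evaluates to null
   case class Size(child: Expr) extends Expr
   case class GreaterThan(left: Expr, right: Expr) extends Expr

   // size(array(e1, ..., en)) > 0 is statically true whenever n >= 1, so the
   // whole comparison collapses to a true literal; the empty case is left alone.
   def simplify(e: Expr): Expr = e match {
     case GreaterThan(Size(c: CreateArray), IntLit(0)) if c.children.nonEmpty =>
       TrueLiteral
     case other => other
   }
   ```

   Combined with `NullPropagation` removing `IsNotNull(CreateArray(...))`, both inferred predicates disappear again, which is the orthogonal-rules interaction the comment describes.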








[GitHub] [spark] cloud-fan commented on a change in pull request #30504: [SPARK-33544][SQL] Optimizer should not insert filter when explode with CreateArray/CreateMap

2020-11-26 Thread GitBox


cloud-fan commented on a change in pull request #30504:
URL: https://github.com/apache/spark/pull/30504#discussion_r531413076



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
##
@@ -873,24 +873,30 @@ object InferFiltersFromGenerate extends Rule[LogicalPlan] {
       if !e.deterministic || e.children.forall(_.foldable) => generate
 
     case generate @ Generate(g, _, false, _, _, _) if canInferFilters(g) =>
-      // Exclude child's constraints to guarantee idempotency
-      val inferredFilters = ExpressionSet(
-        Seq(
-          GreaterThan(Size(g.children.head), Literal(0)),
-          IsNotNull(g.children.head)
-        )
-      ) -- generate.child.constraints
-
-      if (inferredFilters.nonEmpty) {
-        generate.copy(child = Filter(inferredFilters.reduce(And), generate.child))
-      } else {
-        generate
+      g.children.head match {
+        case _: CreateNonEmptyNonNullCollection =>
+          // we don't need to add filters when creating an array because we
+          // know its size is > 0 and it's not null
+          generate
+        case _ =>
+          // Exclude child's constraints to guarantee idempotency
+          val inferredFilters = ExpressionSet(
+            Seq(
+              GreaterThan(Size(g.children.head), Literal(0)),
+              IsNotNull(g.children.head)

Review comment:
   I mean, this rule can add redundant predicates, and we just need another 
rule to optimize them out. This is how catalyst rules should interact: be 
orthogonal and focus on one thing.
   
   Actually, `CreateArray.nullable` is false, so `IsNotNull(CreateArray(...))` 
will be optimized to `true` already, in rule `NullPropagation`. We can probably 
update `SimplifyBinaryComparison` and add
   ```
   case GreaterThan(Size(_: CreateNonNullCollection), IntegerLiteral(0)) =>
     TrueLiteral
   ```









[GitHub] [spark] cloud-fan commented on a change in pull request #30504: [SPARK-33544][SQL] Optimizer should not insert filter when explode with CreateArray/CreateMap

2020-11-25 Thread GitBox


cloud-fan commented on a change in pull request #30504:
URL: https://github.com/apache/spark/pull/30504#discussion_r530606734



##
File path: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/optimizer/Optimizer.scala
##
@@ -873,24 +873,30 @@ object InferFiltersFromGenerate extends Rule[LogicalPlan] {
       if !e.deterministic || e.children.forall(_.foldable) => generate
 
     case generate @ Generate(g, _, false, _, _, _) if canInferFilters(g) =>
-      // Exclude child's constraints to guarantee idempotency
-      val inferredFilters = ExpressionSet(
-        Seq(
-          GreaterThan(Size(g.children.head), Literal(0)),
-          IsNotNull(g.children.head)
-        )
-      ) -- generate.child.constraints
-
-      if (inferredFilters.nonEmpty) {
-        generate.copy(child = Filter(inferredFilters.reduce(And), generate.child))
-      } else {
-        generate
+      g.children.head match {
+        case _: CreateNonEmptyNonNullCollection =>
+          // we don't need to add filters when creating an array because we
+          // know its size is > 0 and it's not null
+          generate
+        case _ =>
+          // Exclude child's constraints to guarantee idempotency
+          val inferredFilters = ExpressionSet(
+            Seq(
+              GreaterThan(Size(g.children.head), Literal(0)),
+              IsNotNull(g.children.head)

Review comment:
   In general, optimizer rules should be orthogonal. For this case, I think a better idea is to add a new optimizer rule, which optimizes `IsNotNull` and size-check expressions above `CreateArray/Map` into a true literal.




