Github user viirya commented on the issue:

    https://github.com/apache/spark/pull/16998
  
    > Not really. Constraint propagation will still be enabled by default in 
Spark. The flag would just be a hammer to quickly get around issues like this 
and SPARK-17733.
    
    Yeah, of course. I meant that when you disable the flag, you would lose the 
optimizations that rely on constraint propagation.
    
    I will create another PR for this option.
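    
    For context, here is a minimal sketch of how such an escape-hatch flag could be 
toggled from a user session. The config key name below is only an assumption; the 
actual key would be settled in the follow-up PR.
    
    ```scala
    // Hypothetical sketch: toggling a constraint-propagation escape hatch.
    // The config key name is an assumption, not a name confirmed by this PR.
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("constraint-propagation-flag-demo")
      .getOrCreate()

    // Turn constraint propagation off to work around a pathological plan
    spark.conf.set("spark.sql.constraintPropagation.enabled", "false")

    // ... run the affected query here ...

    // Turn it back on so other queries keep the constraint-based optimizations
    spark.conf.set("spark.sql.constraintPropagation.enabled", "true")
    ```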
    
    > I'll take a closer look at this patch but given that this PR is primarily 
introducing a data structure that keeps track of aliased constraints, is there 
a fundamental reason for changing the underlying behavior (and restricting the 
optimization space)? Can there be a simpler alternative where we can still keep 
the old semantics?
    
    I haven't found an alternative fix that keeps the old semantics, leaves the 
propagation structure unchanged, and still improves performance significantly.
    
    #16785 keeps the old semantics and does not change the propagation structure, 
but it only cuts the running time in half on the benchmark.
    
    Adding the flag is a simpler option.


