Github user cloud-fan commented on the issue:

    https://github.com/apache/spark/pull/22029
  
    So, the goal here is to make the behavior consistent between multi-column
IN-subquery and multi-column normal IN in Spark.
    
    That said, I feel it's reasonable to change the behavior of `(a, b) in
(struct_col1, struct_col2, ...)` to return null if a field is null, but it
seems pretty weird to also apply this behavior to `input_struct_col in
(struct_col1, struct_col2, ...)`. It's OK to treat the `(...)` syntax
specially, but it's strange to treat the struct type differently.
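    For reference, the null semantics being discussed follow standard SQL
three-valued logic for multi-column IN. Below is a minimal Python sketch of
those semantics (not Spark's actual implementation; the function names are
illustrative): a row comparison is FALSE if any field pair compares FALSE,
NULL if no pair is FALSE but some pair involves NULL, and TRUE otherwise.

```python
NULL = None  # model SQL NULL as Python None

def sql_eq(a, b):
    """SQL scalar equality: NULL if either operand is NULL."""
    if a is NULL or b is NULL:
        return NULL
    return a == b

def sql_row_eq(row_a, row_b):
    """Row-wise equality: FALSE if any field pair is FALSE,
    NULL if no pair is FALSE but some pair is NULL, else TRUE."""
    results = [sql_eq(x, y) for x, y in zip(row_a, row_b)]
    if any(r is False for r in results):
        return False
    if any(r is NULL for r in results):
        return NULL
    return True

def sql_in(row, candidates):
    """SQL IN: TRUE if any candidate matches, NULL if none matches
    but some comparison is NULL, else FALSE."""
    saw_null = False
    for candidate in candidates:
        r = sql_row_eq(row, candidate)
        if r is True:
            return True
        if r is NULL:
            saw_null = True
    return NULL if saw_null else False

# (1, NULL) IN ((1, 2))         -> NULL  (null field, no definite mismatch)
# (1, NULL) IN ((2, 3))         -> FALSE (first field already mismatches)
# (1, 2)    IN ((1, 2), (3, 4)) -> TRUE
```

    Under the change discussed in this thread, `(a, b) in (...)` would follow
these semantics, and the question is whether `input_struct_col in (...)`
should too, or keep struct-equality behavior where null fields can still
compare equal.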

