Github user ueshin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21028#discussion_r184266872
  
    --- Diff: 
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala
 ---
    @@ -288,6 +288,114 @@ case class ArrayContains(left: Expression, right: 
Expression)
       override def prettyName: String = "array_contains"
     }
     
    +/**
    + * Checks if the two arrays contain at least one common element.
    + */
    +@ExpressionDescription(
    +  usage = "_FUNC_(a1, a2) - Returns true if a1 contains at least one 
element that is also present in a2.",
    +  examples = """
    +    Examples:
    +      > SELECT _FUNC_(array(1, 2, 3), array(3, 4, 5));
    +       true
    +  """, since = "2.4.0")
    +case class ArraysOverlap(left: Expression, right: Expression)
    +  extends BinaryExpression with ImplicitCastInputTypes {
    +
    +  private lazy val elementType = 
inputTypes.head.asInstanceOf[ArrayType].elementType
    +
    +  override def dataType: DataType = BooleanType
    +
    +  override def inputTypes: Seq[AbstractDataType] = left.dataType match {
    --- End diff --
    
    There are similar functions that need to handle two (or more) arrays with 
the same element type, such as `array_union` (#21061), `array_intersect` 
(#21102), and `array_except` (#21103), and maybe `concat` (#20858), although 
that one is slightly different.
    I think we should use the same way to specify and check the input types 
across all of them.
    
    I'd like to discuss the best way to do it, here or somewhere else.
    cc @kiszk @mn-mikke Do you have any suggestions?
    Also cc @gatorsmile @cloud-fan 
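    To make the idea concrete, here is a hypothetical sketch of what a shared 
"same element type" check could look like. The stand-in types 
(`checkSameElementType`, the tiny `DataType` hierarchy) are placeholders for 
illustration only, not Spark's actual Catalyst classes:

    ```scala
    // Minimal stand-ins for Catalyst's type classes, just for this sketch.
    sealed trait DataType
    case class ArrayType(elementType: DataType) extends DataType
    case object IntegerType extends DataType
    case object StringType extends DataType
    case object NullType extends DataType

    sealed trait TypeCheckResult
    case object TypeCheckSuccess extends TypeCheckResult
    case class TypeCheckFailure(message: String) extends TypeCheckResult

    // Shared checker each binary array expression could delegate to:
    // both inputs must be arrays with the same element type, where an
    // empty-array literal (element type NullType) coerces to either side.
    def checkSameElementType(name: String,
                             left: DataType,
                             right: DataType): TypeCheckResult =
      (left, right) match {
        case (ArrayType(l), ArrayType(r))
            if l == r || l == NullType || r == NullType =>
          TypeCheckSuccess
        case (ArrayType(_), ArrayType(_)) =>
          TypeCheckFailure(
            s"$name requires both arrays to have the same element type")
        case _ =>
          TypeCheckFailure(s"$name requires two array arguments")
      }
    ```

    With something like this, `arrays_overlap`, `array_union`, 
`array_intersect`, and `array_except` could all report consistent errors from 
one place instead of each re-implementing the check.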


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
