GitHub user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22408#discussion_r218052897
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala ---
    @@ -1331,23 +1331,27 @@ case class ArrayContains(left: Expression, right: Expression)
       @transient private lazy val ordering: Ordering[Any] =
         TypeUtils.getInterpretedOrdering(right.dataType)
     
    -  override def inputTypes: Seq[AbstractDataType] = right.dataType match {
    -    case NullType => Seq.empty
    -    case _ => left.dataType match {
    -      case n @ ArrayType(element, _) => Seq(n, element)
    +  override def inputTypes: Seq[AbstractDataType] = {
    +    (left.dataType, right.dataType) match {
    +      case (_, NullType) => Seq.empty
    +      case (ArrayType(e1, hasNull), e2) =>
    +        TypeCoercion.findTightestCommonType(e1, e2) match {
    --- End diff ---
    
    I think we have a bug in `findTightestCommonType`. For an int and a 
decimal, it can return a common type when the decimal is wide enough to hold 
any int, but it returns `None` when the decimal's precision is smaller.
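    
    For reference, a minimal sketch of that asymmetry (assuming the 
`TypeCoercion.findTightestCommonType` signature of that era, a 
`(DataType, DataType) => Option[DataType]` function):
    
    ```scala
    import org.apache.spark.sql.catalyst.analysis.TypeCoercion
    import org.apache.spark.sql.types._
    
    // Decimal wide enough to hold any int (precision - scale >= 10):
    // a common type is found.
    TypeCoercion.findTightestCommonType(IntegerType, DecimalType(20, 0))
    // => Some(DecimalType(20,0))
    
    // Narrower decimal: no common type is found, even though the
    // comparison is arguably still meaningful.
    TypeCoercion.findTightestCommonType(IntegerType, DecimalType(5, 0))
    // => None
    ```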


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
