Github user mgaido91 commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21028#discussion_r186690973
  
    --- Diff: sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala ---
    @@ -28,6 +30,34 @@ import org.apache.spark.unsafe.Platform
     import org.apache.spark.unsafe.array.ByteArrayMethods
     import org.apache.spark.unsafe.types.{ByteArray, UTF8String}
     
    +/**
    + * Base trait for [[BinaryExpression]]s with two arrays of the same element type and implicit
    + * casting.
    + */
    +trait BinaryArrayExpressionWithImplicitCast extends BinaryExpression
    +  with ImplicitCastInputTypes {
    +
    +  protected lazy val elementType: DataType = inputTypes.head.asInstanceOf[ArrayType].elementType
    +
    +  override def inputTypes: Seq[AbstractDataType] = {
    +    TypeCoercion.findWiderTypeForTwo(left.dataType, right.dataType) match {
    --- End diff --
    
    Good question. Checking the way we are doing it, I would say no. At the moment we are bounding the decimal precision in a rather odd way (we lose integral digits instead of fractional ones), so widening could produce many `NULL`s on overflow. What do you think?
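    To make the concern concrete, here is a minimal, self-contained sketch of the kind of precision bounding being discussed. The names (`DecimalType`, `widerDecimalType`, `MaxPrecision`) are illustrative stand-ins, not Spark's actual API: when the combined precision exceeds the cap, naively clamping it shrinks the integral part, so values that were representable in either input type can overflow and evaluate to `NULL` at runtime.
    
    ```scala
    // Hypothetical model of widening two decimal types; not Spark's exact logic.
    case class DecimalType(precision: Int, scale: Int)
    
    object WiderDecimal {
      val MaxPrecision = 38
    
      // Keep the larger scale and the larger integral-digit count, then cap
      // the total precision. Capping at the same scale silently drops
      // integral digits, which is the source of the NULLs mentioned above.
      def widerDecimalType(a: DecimalType, b: DecimalType): DecimalType = {
        val scale = math.max(a.scale, b.scale)
        val intDigits = math.max(a.precision - a.scale, b.precision - b.scale)
        val precision = intDigits + scale
        if (precision <= MaxPrecision) DecimalType(precision, scale)
        else DecimalType(MaxPrecision, scale) // integral digits are lost here
      }
    }
    ```
    
    For example, widening `DecimalType(38, 0)` with `DecimalType(10, 10)` would need precision 48, which gets capped to `DecimalType(38, 10)`: only 28 integral digits survive, so 10 digits' worth of large values now overflow.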


---

---------------------------------------------------------------------
To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org
For additional commands, e-mail: reviews-h...@spark.apache.org
