Github user maropu commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20858#discussion_r178700550
  
    --- Diff: sql/core/src/main/scala/org/apache/spark/sql/functions.scala ---
    @@ -3046,6 +3036,16 @@ object functions {
         ArrayContains(column.expr, Literal(value))
       }
     
    +  /**
    +   * Concatenates multiple input columns together into a single column.
    +   * The function works with strings, binary columns and arrays of the same type.
    +   *
    +   * @group collection_funcs
    +   * @since 1.5.0
    +   */
    +  @scala.annotation.varargs
    +  def concat(exprs: Column*): Column = withExpr { UnresolvedConcat(exprs.map(_.expr)) }
    --- End diff --
    
    If you want to use the existing `concat` to merge arrays, I feel it'd be better to implement new logic for merging arrays directly in `Concat`. I think this approach could remove `UnresolvedConcat`, too. Thoughts?

