Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22939#discussion_r230688803
  
    --- Diff: R/pkg/R/functions.R ---
    @@ -2230,6 +2237,32 @@ setMethod("from_json", signature(x = "Column", schema = "characterOrstructType")
                 column(jc)
               })
     
    +#' @details
    +#' \code{schema_of_json}: Parses a JSON string and infers its schema in DDL format.
    +#'
    +#' @rdname column_collection_functions
    +#' @aliases schema_of_json schema_of_json,characterOrColumn-method
    +#' @examples
    +#'
    +#' \dontrun{
    +#' json <- '{"name":"Bob"}'
    +#' df <- sql("SELECT * FROM range(1)")
    +#' head(select(df, schema_of_json(json)))}
    +#' @note schema_of_json since 3.0.0
    +setMethod("schema_of_json", signature(x = "characterOrColumn"),
    +          function(x, ...) {
    +            if (class(x) == "character") {
    +              col <- callJStatic("org.apache.spark.sql.functions", "lit", x)
    +            } else {
    +              col <- x@jc
    --- End diff --
    
    That's actually related to the Scala API. There are too many overloaded versions of functions in `functions.scala`, so we're trying to reduce them. Column is preferred over other, more specific types because a Column can cover other expression cases. In Python and R the extra types can be supported easily, so both plain values and Columns are accepted there. To cut it short: it's for consistency with the Scala API.
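
    To make this concrete, here is a minimal SparkR sketch (a usage illustration assuming a running Spark session with this PR applied; `lit` is the existing SparkR literal helper) showing how the single `characterOrColumn` signature covers both call forms without an extra JVM overload:

        library(SparkR)
        sparkR.session()  # assumes a local Spark installation

        json <- '{"name":"Bob"}'
        df <- sql("SELECT * FROM range(1)")

        # Character input: the character branch above calls functions.lit
        # on the JVM to build a literal Column first.
        head(select(df, schema_of_json(json)))

        # Column input: an explicit literal Column takes the x@jc branch directly.
        head(select(df, schema_of_json(lit(json))))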

