Github user patrickmcgloin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21671#discussion_r199330240
  
    --- Diff: python/pyspark/sql/functions.py ---
    @@ -2163,9 +2163,9 @@ def json_tuple(col, *fields):
     @since(2.1)
     def from_json(col, schema, options={}):
         """
    -    Parses a column containing a JSON string into a :class:`MapType` with :class:`StringType`
    -    as keys type, :class:`StructType` or :class:`ArrayType` of :class:`StructType`\\s with
    -    the specified schema. Returns `null`, in the case of an unparseable string.
    +    Parses a column containing a JSON string into a :class:`MapType`, :class:`StructType`
    +    or :class:`ArrayType` of :class:`StructType`\\s with the specified schema. Returns
    +    `null`, in the case of an unparseable string.
    --- End diff ---
    
    I think the other basic types (e.g. Int, Long, etc.) were already supported 
along with String. As long as a key can be converted directly from String to 
the type specified in the schema, it worked. What we have added here is parsing 
for DateType and TimestampType. I don't think complex types would ever be 
supported as keys, and I don't think the other basic types need the parsing 
that dates and timestamps do.
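
    For example, here's a minimal sketch of the difference (the SparkSession 
setup, sample JSON, and column names are just illustrative, not from this PR):

        from pyspark.sql import SparkSession
        from pyspark.sql.functions import from_json
        from pyspark.sql.types import MapType, DateType, IntegerType, StringType

        spark = SparkSession.builder.master("local[1]").getOrCreate()

        # An IntegerType key already worked, because "1" converts directly to 1.
        df_int = spark.createDataFrame([('{"1": "a"}',)], ["json"])
        df_int.select(from_json("json", MapType(IntegerType(), StringType()))).show()

        # A DateType key needs the parsing added in this PR; before it, the
        # string key could not be converted to a date.
        df_date = spark.createDataFrame([('{"2018-07-02": "a"}',)], ["json"])
        df_date.select(from_json("json", MapType(DateType(), StringType()))).show()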
    
    Do you think we should specify in the docstring which types are supported 
as keys?

