Github user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20666#discussion_r170510027
  
    --- Diff: python/pyspark/sql/readwriter.py ---
    @@ -209,13 +209,15 @@ def json(self, path, schema=None, primitivesAsString=None, prefersDecimal=None,
             :param mode: allows a mode for dealing with corrupt records during parsing. If None is
                          set, it uses the default value, ``PERMISSIVE``.
     
    -                * ``PERMISSIVE`` : sets other fields to ``null`` when it meets a corrupted \
    -                 record, and puts the malformed string into a field configured by \
    -                 ``columnNameOfCorruptRecord``. To keep corrupt records, an user can set \
    -                 a string type field named ``columnNameOfCorruptRecord`` in an user-defined \
    -                 schema. If a schema does not have the field, it drops corrupt records during \
    -                 parsing. When inferring a schema, it implicitly adds a \
    -                 ``columnNameOfCorruptRecord`` field in an output schema.
    +                * ``PERMISSIVE`` : when it meets a corrupted record, puts the malformed string \
    +                  into a field configured by ``columnNameOfCorruptRecord``, and sets other \
    +                  fields to ``null``. To keep corrupt records, an user can set a string type \
    +                  field named ``columnNameOfCorruptRecord`` in an user-defined schema. If a \
    +                  schema does not have the field, it drops corrupt records during parsing. \
    +                  When inferring a schema, it implicitly adds a ``columnNameOfCorruptRecord`` \
    --- End diff --
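
    For readers skimming the thread, here is a minimal PySpark sketch (not part of the PR) of the ``PERMISSIVE`` behaviour the docstring describes. It assumes an active `spark` session and the default ``_corrupt_record`` column name:
    
    ```python
    # Sketch only: a user-defined schema keeps corrupt records by including a
    # string-typed field matching columnNameOfCorruptRecord (default: _corrupt_record).
    from pyspark.sql.types import StructType, StructField, StringType, LongType

    schema = StructType([
        StructField("a", LongType(), True),
        StructField("_corrupt_record", StringType(), True),
    ])

    # One valid and one malformed JSON record.
    rdd = spark.sparkContext.parallelize(['{"a": 1}', '{"a":'])

    df = spark.read.schema(schema).json(rdd, mode="PERMISSIVE")
    df.show()
    # Expected with PERMISSIVE: the malformed string lands in _corrupt_record and
    # the other fields are null for that row:
    # +----+---------------+
    # |   a|_corrupt_record|
    # +----+---------------+
    # |   1|           null|
    # |null|          {"a":|
    # +----+---------------+
    ```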
    
    Ah I thought this:
    
    ```
    When inferring a schema, it implicitly adds a ``columnNameOfCorruptRecord`` field in an output schema.
    ```
    
    describes schema inference, because it adds the `columnNameOfCorruptRecord` column if a malformed record was found during schema inference. I mean:
    
    ```scala
    scala> spark.read.json(Seq("""{"a": 1}""", """{"a":""").toDS).printSchema()
    root
     |-- _corrupt_record: string (nullable = true)
     |-- a: long (nullable = true)
    
    
    scala> spark.read.json(Seq("""{"a": 1}""").toDS).printSchema()
    root
     |-- a: long (nullable = true)
    ```
    
    but yes, I think I misread it. Here we already describe things mainly in terms of malformed records.
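
    For completeness, a rough PySpark counterpart of the Scala snippet above (not from the thread); it assumes an active `spark` session and the default ``_corrupt_record`` column name:
    
    ```python
    # Schema inference over one valid and one malformed record: the inferred schema
    # is expected to gain a _corrupt_record field because a malformed record was seen.
    spark.read.json(spark.sparkContext.parallelize(['{"a": 1}', '{"a":'])).printSchema()
    # root
    #  |-- _corrupt_record: string (nullable = true)
    #  |-- a: long (nullable = true)

    # With only well-formed records, no corrupt-record column is added.
    spark.read.json(spark.sparkContext.parallelize(['{"a": 1}'])).printSchema()
    # root
    #  |-- a: long (nullable = true)
    ```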

