Github user gatorsmile commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22759#discussion_r232441937
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -706,7 +706,7 @@ data across a fixed number of buckets and can be used when a number of unique va
     
     [Parquet](http://parquet.io) is a columnar format that is supported by many other data processing systems.
     Spark SQL provides support for both reading and writing Parquet files that automatically preserves the schema
    -of the original data. When writing Parquet files, all columns are automatically converted to be nullable for
    +of the original data. When reading Parquet files, all columns are automatically converted to be nullable for
    --- End diff --
    
    This file has been reorganized. Could you merge the latest master?
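    For context, the corrected wording ("When **reading** Parquet files, all columns are automatically converted to be nullable") can be demonstrated with a minimal PySpark sketch; the scratch path and local session settings below are illustrative assumptions, not part of the PR:

    ```python
    import tempfile

    from pyspark.sql import SparkSession
    from pyspark.sql.types import LongType, StructField, StructType

    spark = SparkSession.builder.master("local[1]").appName("nullable-demo").getOrCreate()

    # Build a DataFrame whose column is explicitly NOT nullable.
    schema = StructType([StructField("id", LongType(), nullable=False)])
    df = spark.createDataFrame([(1,), (2,)], schema)
    assert not df.schema["id"].nullable

    # Round-trip through Parquet: on read, Spark marks every column nullable
    # for compatibility reasons, regardless of the schema it was written with.
    path = tempfile.mkdtemp() + "/nullable_demo_parquet"  # hypothetical scratch path
    df.write.mode("overwrite").parquet(path)
    read_back = spark.read.parquet(path)
    print(read_back.schema["id"].nullable)  # True
    ```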


---
