Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/21169#discussion_r184597402
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1805,12 +1805,13 @@ working with timestamps in `pandas_udf`s to get the 
best performance, see
     
       - Since Spark 2.4, Spark maximizes the usage of a vectorized ORC reader 
for ORC files by default. To do that, `spark.sql.orc.impl` and 
`spark.sql.orc.filterPushdown` change their default values to `native` and 
`true` respectively.
      - In PySpark, when Arrow optimization is enabled, `toPandas` previously just failed when Arrow optimization could not be used, whereas `createDataFrame` from a Pandas DataFrame allowed falling back to the non-optimized path. Now both `toPandas` and `createDataFrame` from a Pandas DataFrame allow the fallback by default, which can be switched off by `spark.sql.execution.arrow.fallback.enabled`.
    - - Since Spark 2.4, writing an empty dataframe to a directory launches at least one write task, even if the dataframe physically has no partitions. This introduces a small behavior change: for self-describing file formats like Parquet and ORC, Spark creates a metadata-only file in the target directory when writing a 0-partition dataframe, so that schema inference can still work if users read that directory later. The new behavior is more reasonable and more consistent with respect to writing empty dataframes.
    - - Since Spark 2.4, expression IDs in UDF arguments do not appear in column names. For example, a column name in Spark 2.4 is ``UDF:f(col0 AS `colA`)`` rather than `UDF:f(col0 AS colA#28)`.
    - - Since Spark 2.4, writing a dataframe with an empty or nested empty schema using any file format (Parquet, ORC, JSON, text, CSV, etc.) is not allowed. An exception is thrown when attempting to write dataframes with an empty schema.
    - - Since Spark 2.4, Spark compares a DATE type with a TIMESTAMP type after promoting both sides to TIMESTAMP. Setting `spark.sql.hive.compareDateTimestampInTimestamp` to `false` restores the previous behavior. This option will be removed in Spark 3.0.
    - - Since Spark 2.4, creating a managed table with a nonempty location is not allowed. An exception is thrown when attempting to create a managed table with a nonempty location. Setting `spark.sql.allowCreatingManagedTableUsingNonemptyLocation` to `true` restores the previous behavior. This option will be removed in Spark 3.0.
    - - Since Spark 2.4, the type coercion rules can automatically promote the argument types of variadic SQL functions (e.g., IN/COALESCE) to the widest common type, regardless of the order of the input arguments. In prior Spark versions, the promotion could fail for some specific orders (e.g., TimestampType, IntegerType and StringType) and throw an exception.
    +  - Since Spark 2.4, writing an empty dataframe to a directory launches at least one write task, even if the dataframe physically has no partitions. This introduces a small behavior change: for self-describing file formats like Parquet and ORC, Spark creates a metadata-only file in the target directory when writing a 0-partition dataframe, so that schema inference can still work if users read that directory later. The new behavior is more reasonable and more consistent with respect to writing empty dataframes.
    +  - Since Spark 2.4, expression IDs in UDF arguments do not appear in column names. For example, a column name in Spark 2.4 is ``UDF:f(col0 AS `colA`)`` rather than `UDF:f(col0 AS colA#28)`.
    +  - Since Spark 2.4, writing a dataframe with an empty or nested empty schema using any file format (Parquet, ORC, JSON, text, CSV, etc.) is not allowed. An exception is thrown when attempting to write dataframes with an empty schema.
    --- End diff --
    
    Like `new StructType("empty", new StructType())`: the table has a column, and the column is of struct type but has 0 fields. This schema is invalid to write out.
    
    Anyway, this is an existing comment and I just fixed its indentation.
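
    For readers of the thread, the rejected case can be sketched in a few lines of Spark Scala. This is only an illustrative snippet, not part of the PR; the local session setup and output path are assumptions:

    ```scala
    import org.apache.spark.sql.{Row, SparkSession}
    import org.apache.spark.sql.types.{StructField, StructType}

    val spark = SparkSession.builder()
      .master("local[1]")
      .appName("nested-empty-schema-demo")
      .getOrCreate()

    // One column named "empty" whose type is a struct with 0 fields --
    // the "nested empty schema" case described above.
    val nestedEmpty = StructType(Seq(StructField("empty", StructType(Nil))))
    val df = spark.createDataFrame(spark.sparkContext.emptyRDD[Row], nestedEmpty)

    // Since Spark 2.4 this write is rejected with an exception instead of
    // producing a file whose schema could never be read back.
    df.write.parquet("/tmp/nested-empty-demo") // throws an AnalysisException
    ```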

