GitHub user HyukjinKwon commented on a diff in the pull request:

    https://github.com/apache/spark/pull/20234#discussion_r160949121
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1788,12 +1788,10 @@ options.
     Note that, for <b>DecimalType(38,0)*</b>, the table above intentionally does not cover all other combinations of scales and precisions because currently we only infer decimal types like `BigInteger`/`BigInt`. For example, 1.1 is inferred as a double type.
      - In PySpark, we now require Pandas 0.19.2 or higher if you want to use Pandas-related functionalities, such as `toPandas` or `createDataFrame` from a Pandas DataFrame.
      - In PySpark, the behavior of timestamp values for Pandas-related functionalities was changed to respect the session time zone. If you want to use the old behavior, set the configuration `spark.sql.execution.pandas.respectSessionTimeZone` to `False`. See [SPARK-22395](https://issues.apache.org/jira/browse/SPARK-22395) for details.
    -
    - - Since Spark 2.3, when either broadcast hash join or broadcast nested loop join is applicable, we prefer broadcasting the table that is explicitly specified in a broadcast hint. For details, see the section [Broadcast Hint](#broadcast-hint-for-sql-queries) and [SPARK-22489](https://issues.apache.org/jira/browse/SPARK-22489).
    -
    - - Since Spark 2.3, when all inputs are binary, `functions.concat()` returns its output as binary; otherwise, it returns a string. Until Spark 2.3, it always returned a string regardless of the input types. To keep the old behavior, set `spark.sql.function.concatBinaryAsString` to `true`.
    -
    - - Since Spark 2.3, when all inputs are binary, SQL `elt()` returns its output as binary; otherwise, it returns a string. Until Spark 2.3, it always returned a string regardless of the input types. To keep the old behavior, set `spark.sql.function.eltOutputAsString` to `true`.
    +  - In PySpark, `na.fill()` or `fillna` also accepts booleans and replaces NAs with booleans. In prior Spark versions, PySpark just ignored it and returned the original Dataset/DataFrame.
    --- End diff ---
    
    Shall we say `null` instead of `NA`? I actually think `null` is more correct.
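    
    For reference, a minimal sketch of the behavior this note describes (my own hand-written example, assuming Spark 2.3 and a local session; the column name `flag` is made up for illustration):
    
    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql.types import BooleanType, StructField, StructType
    
    spark = SparkSession.builder.master("local[1]").getOrCreate()
    
    # One nullable boolean column, with a null in the middle row.
    schema = StructType([StructField("flag", BooleanType(), nullable=True)])
    df = spark.createDataFrame([(True,), (None,), (False,)], schema)
    
    # Since Spark 2.3, fillna(True) replaces the null with True;
    # prior versions silently ignored the boolean and returned df unchanged.
    df.fillna(True).show()
    # +-----+
    # | flag|
    # +-----+
    # | true|
    # | true|
    # |false|
    # +-----+
    ```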

