Github user cloud-fan commented on a diff in the pull request:

    https://github.com/apache/spark/pull/22814#discussion_r228164087
  
    --- Diff: docs/sql-migration-guide-upgrade.md ---
    @@ -11,6 +11,10 @@ displayTitle: Spark SQL Upgrading Guide
     
      - In PySpark, when creating a `SparkSession` with 
`SparkSession.builder.getOrCreate()`, if there is an existing `SparkContext`, 
the builder tried to update the `SparkConf` of the existing `SparkContext` 
with configurations specified in the builder. But the `SparkContext` is shared 
by all `SparkSession`s, so it should not be updated. Since Spark 3.0, the 
builder no longer updates the configurations. This matches the behavior of the 
Java/Scala API in 2.3 and above. If you want to update the configurations, 
do so before creating the `SparkSession`.
     
    +  - In the Avro data source, the function `from_avro` supports the 
following parse modes:
    +    * `PERMISSIVE`: Corrupt records are processed as null results. To 
implement this, the data schema is forced to be fully nullable, which might be 
different from the one the user provided. This is the default mode.
    +    * `FAILFAST`: Throws an exception on processing a corrupted record.
    --- End diff ---
    
    Let's explain what changed compared to the previous behavior.
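
    For reference, a minimal PySpark sketch of the builder behavior described 
    in the first item above (the config key is just an illustrative choice):

    ```python
    from pyspark import SparkConf, SparkContext
    from pyspark.sql import SparkSession

    # Set configurations before any SparkContext exists ...
    conf = SparkConf().set("spark.executor.memory", "2g")
    sc = SparkContext(conf=conf)

    # ... because since 3.0 the builder reuses the existing SparkContext
    # as-is: this config is ignored instead of being applied to it.
    spark = SparkSession.builder \
        .config("spark.executor.memory", "4g") \
        .getOrCreate()

    print(spark.sparkContext.getConf().get("spark.executor.memory"))  # "2g"
    ```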

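    And a sketch of the two parse modes of `from_avro` (assuming a 3.0+ 
    runtime with the `spark-avro` package on the classpath; the record schema 
    is made up for illustration):

    ```python
    from pyspark.sql import SparkSession
    from pyspark.sql.avro.functions import from_avro, to_avro
    from pyspark.sql.functions import struct

    spark = SparkSession.builder.getOrCreate()

    # Round-trip rows through Avro binary so there is something to parse.
    avro_schema = '{"type":"record","name":"r","fields":[{"name":"id","type":"long"}]}'
    df = spark.range(3).select(to_avro(struct("id")).alias("value"))

    # PERMISSIVE (default): a corrupt record is returned as null.
    permissive = df.select(from_avro("value", avro_schema).alias("r"))

    # FAILFAST: processing a corrupt record raises an exception instead.
    failfast = df.select(
        from_avro("value", avro_schema, {"mode": "FAILFAST"}).alias("r"))
    ```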
