Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/8441#discussion_r37953140
  
    --- Diff: docs/sql-programming-guide.md ---
    @@ -1988,6 +2005,27 @@ options.
     
     # Migration Guide
     
    +## Upgrading From Spark SQL 1.4 to 1.5
    +
    + - Optimized execution using manually managed memory (Tungsten) is now enabled by default, along with
    +   code generation for expression evaluation.  These features can both be disabled by setting
    +   `spark.sql.tungsten.enabled` to `false`.
    + - Parquet schema merging is no longer enabled by default.  It can be re-enabled by setting
    +   `spark.sql.parquet.mergeSchema` to `true`.
    + - Resolution of strings to columns in python now supports using dots (`.`) to qualify the column or
    +   access nested values.  For example `df['table.column.nestedField']`.  However, this means that if
    +   your column name contains any dots you must now escape them using backticks.
    + - In-memory columnar storage partition pruning is on by default. It can be disabled by setting
    +   `spark.sql.inMemoryColumnarStorage.partitionPruning` to `false`.
    + - Unlimited precision decimal columns are no longer supported, instead Spark SQL enforces a maximum
    --- End diff --
    
    should also mention that timestamp precision is now 1us, rather than 1ns.
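
    For reference, a minimal PySpark sketch of the opt-outs and the backtick
    escaping described in the quoted section; the `sqlContext` and `df` names
    are assumed for illustration and do not come from the diff:

        # Assumes a Spark 1.5-style SQLContext bound to `sqlContext` and an
        # existing DataFrame `df` -- both names are illustrative only.

        # Revert to the pre-1.5 behaviour described above, if desired.
        sqlContext.setConf("spark.sql.tungsten.enabled", "false")
        sqlContext.setConf("spark.sql.parquet.mergeSchema", "true")
        sqlContext.setConf("spark.sql.inMemoryColumnarStorage.partitionPruning", "false")

        # Dots in column strings now qualify columns or reach into nested fields,
        # so a column whose name literally contains a dot must be backtick-escaped.
        df.select("table.column.nestedField")  # nested field access
        df.select("`a.b`")                     # column literally named "a.b"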



