Repository: spark
Updated Branches:
  refs/heads/master dff440f1e -> 4131ad03f


[SPARK-22489][DOC][FOLLOWUP] Update broadcast behavior changes in migration section

## What changes were proposed in this pull request?

Document the Spark 2.3 broadcast hint behavior change (SPARK-22489) in the migration section of the SQL programming guide.

## How was this patch tested?

N/A (documentation-only change).

Author: Yuming Wang <wgy...@gmail.com>

Closes #19858 from wangyum/SPARK-22489-migration.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/4131ad03
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/4131ad03
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/4131ad03

Branch: refs/heads/master
Commit: 4131ad03f4d2dfcfb1e166e5dfdf0752479f7340
Parents: dff440f
Author: Yuming Wang <wgy...@gmail.com>
Authored: Sun Dec 3 23:52:37 2017 -0800
Committer: gatorsmile <gatorsm...@gmail.com>
Committed: Sun Dec 3 23:52:37 2017 -0800

----------------------------------------------------------------------
 docs/sql-programming-guide.md | 2 ++
 1 file changed, 2 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/4131ad03/docs/sql-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index a1b9c3b..b76be91 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -1776,6 +1776,8 @@ options.
     Note that, for <b>DecimalType(38,0)*</b>, the table above intentionally does not cover all other combinations of scales and precisions because currently we only infer decimal type like `BigInteger`/`BigInt`. For example, 1.1 is inferred as double type.
   - In PySpark, now we need Pandas 0.19.2 or higher if you want to use Pandas related functionalities, such as `toPandas`, `createDataFrame` from Pandas DataFrame, etc.
   - In PySpark, the behavior of timestamp values for Pandas related functionalities was changed to respect the session timezone. If you want to use the old behavior, you need to set the configuration `spark.sql.execution.pandas.respectSessionTimeZone` to `False`. See [SPARK-22395](https://issues.apache.org/jira/browse/SPARK-22395) for details.
+
+  - Since Spark 2.3, when either broadcast hash join or broadcast nested loop join is applicable, we prefer to broadcast the table that is explicitly specified in a broadcast hint. For details, see the section [Broadcast Hint](#broadcast-hint-for-sql-queries) and [SPARK-22489](https://issues.apache.org/jira/browse/SPARK-22489).
 
 ## Upgrading From Spark SQL 2.1 to 2.2
 

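For reference, a minimal PySpark sketch of the hinted-broadcast behavior this note documents; the table names (`records`, `src`) and the join key are illustrative, not taken from the commit:

```python
# Minimal sketch (illustrative names): explicitly hinting the broadcast side.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("broadcast-hint-sketch").getOrCreate()

# SQL hint syntax: since Spark 2.3, when a broadcast hash join or broadcast
# nested loop join is applicable, the relation named in the hint is the one
# preferred as the broadcast side.
spark.sql("SELECT /*+ BROADCAST(r) */ * FROM records r JOIN src s ON r.key = s.key")

# DataFrame API equivalent: mark one side of the join for broadcast.
records = spark.table("records")
src = spark.table("src")
joined = records.join(broadcast(src), records.key == src.key)
```

Without a hint, Spark falls back to its size-estimate-based choice of which side to broadcast.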

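Similarly, a hedged sketch of the PySpark configuration mentioned in the surrounding context lines, for restoring the pre-2.3 timestamp behavior in Pandas-related functionality (a local session is assumed):

```python
# Sketch: reverting Pandas-related timestamp handling to the pre-2.3 behavior.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pandas-timezone-sketch").getOrCreate()

# Since Spark 2.3, `toPandas` and `createDataFrame` from a Pandas DataFrame
# respect the session timezone for timestamps; setting this to "false"
# restores the old behavior. Pandas 0.19.2 or higher is required to use
# these functionalities at all.
spark.conf.set("spark.sql.execution.pandas.respectSessionTimeZone", "false")

pdf = spark.range(3).toPandas()  # conversion now uses the old behavior
```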