Repository: spark
Updated Branches:
  refs/heads/master 9a76f23c6 -> a1a64e358


[SPARK-21335][DOC] doc changes for disallowed un-aliased subquery use case

## What changes were proposed in this pull request?
Document the behavior change for the disallowed un-aliased subquery use case,
addressing the last question in PR #18559:
https://github.com/apache/spark/pull/18559#issuecomment-316884858


## How was this patch tested?
This is a documentation-only change; it does not affect tests.


Author: Yuexin Zhang <zach.yx.zh...@gmail.com>

Closes #21647 from cnZach/doc_change_for_SPARK-20690_SPARK-21335.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a1a64e35
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a1a64e35
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a1a64e35

Branch: refs/heads/master
Commit: a1a64e3583cfa451b4d0d2361c1da2972a5e4444
Parents: 9a76f23
Author: Yuexin Zhang <zach.yx.zh...@gmail.com>
Authored: Wed Jun 27 16:05:36 2018 +0800
Committer: Wenchen Fan <wenc...@databricks.com>
Committed: Wed Jun 27 16:05:36 2018 +0800

----------------------------------------------------------------------
 docs/sql-programming-guide.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/a1a64e35/docs/sql-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 7c4ef41..cd7329b 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -2017,6 +2017,7 @@ working with timestamps in `pandas_udf`s to get the best performance, see
     - Literal values used in SQL operations are converted to DECIMAL with the exact precision and scale needed by them.
     - The configuration `spark.sql.decimalOperations.allowPrecisionLoss` has been introduced. It defaults to `true`, which means the new behavior described here; if set to `false`, Spark uses previous rules, ie. it doesn't adjust the needed scale to represent the values and it returns NULL if an exact representation of the value is not possible.
   - In PySpark, `df.replace` does not allow to omit `value` when `to_replace` is not a dictionary. Previously, `value` could be omitted in the other cases and had `None` by default, which is counterintuitive and error-prone.
+  - The semantics of un-aliased subqueries have not been well defined, leading to confusing behaviors. Since Spark 2.3, we invalidate such confusing cases, for example: `SELECT v.i FROM (SELECT i FROM v)`. Spark will throw an analysis exception in this case because users should not be able to use the qualifier inside a subquery. See [SPARK-20690](https://issues.apache.org/jira/browse/SPARK-20690) and [SPARK-21335](https://issues.apache.org/jira/browse/SPARK-21335) for more details.
 
 ## Upgrading From Spark SQL 2.1 to 2.2
 

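For illustration, here is a minimal spark-shell (Scala) sketch of the
disallowed pattern and the aliased workaround. The view name `v`, column `i`,
and alias `t` are assumptions following the example in the doc change above;
the exact exception message may vary by version.

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("subquery-alias-example")
      .getOrCreate()

    // A small temp view named `v` with a single column `i`,
    // matching the example in the doc change above.
    spark.range(3).selectExpr("id AS i").createOrReplaceTempView("v")

    // Disallowed since Spark 2.3: the outer query uses the qualifier `v`,
    // but the un-aliased subquery does not expose `v` as a resolvable name.
    // spark.sql("SELECT v.i FROM (SELECT i FROM v)")  // throws AnalysisException

    // Workaround: alias the subquery and qualify columns by that alias.
    spark.sql("SELECT t.i FROM (SELECT i FROM v) AS t").show()

Aliasing the subquery gives the outer query an explicit name to qualify
against, which is what Spark 2.3 and later require.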
