This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch branch-3.4
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.4 by this push:
     new 51070a69c3c [SPARK-42281][PYTHON][DOCS] Update Debugging PySpark documents to show error message properly
51070a69c3c is described below

commit 51070a69c3c725361df955f174669e3b0e9d5793
Author: itholic <haejoon....@databricks.com>
AuthorDate: Thu Feb 2 18:05:29 2023 +0900

    [SPARK-42281][PYTHON][DOCS] Update Debugging PySpark documents to show error message properly
    
    This PR proposes to update the examples in [Debugging PySpark](https://spark.apache.org/docs/latest/api/python/development/debugging.html#debugging-pyspark).
    
    The examples described in [Debugging PySpark](https://spark.apache.org/docs/latest/api/python/development/debugging.html#debugging-pyspark) no longer match the current PySpark error framework; see the sketch below.
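    
    For reference, a minimal sketch of the new import path (an illustration, not part of the change itself; assumes a running SparkSession bound to `spark`):
    
        from pyspark.errors import AnalysisException
        
        df = spark.range(1)
        try:
            df["bad_key"]  # unresolved column raises AnalysisException
        except AnalysisException as e:
            print(e)  # e.g. Cannot resolve column name "bad_key" among (id)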
    
    Documentation update only.
    
    Manually tested by building the documentation locally.
    
    Closes #39852 from itholic/fix_develop_doc.
    
    Authored-by: itholic <haejoon....@databricks.com>
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
    (cherry picked from commit 8c68fc77566e641dcb196de5bdec91215b00e44a)
    Signed-off-by: Hyukjin Kwon <gurwls...@apache.org>
---
 python/docs/source/development/debugging.rst | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/python/docs/source/development/debugging.rst b/python/docs/source/development/debugging.rst
index ba656294ef4..a188d3f3e78 100644
--- a/python/docs/source/development/debugging.rst
+++ b/python/docs/source/development/debugging.rst
@@ -411,7 +411,7 @@ Example:
     >>> df['bad_key']
     Traceback (most recent call last):
     ...
-    pyspark.sql.utils.AnalysisException: Cannot resolve column name "bad_key" among (id)
+    pyspark.errors.exceptions.AnalysisException: Cannot resolve column name "bad_key" among (id)
 
 Solution:
 
@@ -431,8 +431,9 @@ Example:
     >>> spark.sql("select * 1")
     Traceback (most recent call last):
     ...
-    pyspark.sql.utils.ParseException:
-    Syntax error at or near '1': extra input '1'(line 1, pos 9)
+    pyspark.errors.exceptions.ParseException:
+    [PARSE_SYNTAX_ERROR] Syntax error at or near '1': extra input '1'.(line 1, pos 9)
+
     == SQL ==
     select * 1
     ---------^^^
@@ -455,7 +456,7 @@ Example:
     >>> spark.range(1).sample(-1.0)
     Traceback (most recent call last):
     ...
-    pyspark.sql.utils.IllegalArgumentException: requirement failed: Sampling fraction (-1.0) must be on interval [0, 1] without replacement
+    pyspark.errors.exceptions.IllegalArgumentException: requirement failed: Sampling fraction (-1.0) must be on interval [0, 1] without replacement
 
 Solution:
 
@@ -474,6 +475,7 @@ Example:
 
 .. code-block:: python
 
+    >>> import pyspark.sql.functions as F
     >>> from pyspark.sql.functions import udf
     >>> def f(x):
     ...   return F.abs(x)
@@ -512,7 +514,7 @@ Example:
       File "<stdin>", line 1, in <lambda>
     ZeroDivisionError: division by zero
     ...
-    pyspark.sql.utils.StreamingQueryException: Query q1 [id = ced5797c-74e2-4079-825b-f3316b327c7d, runId = 65bacaf3-9d51-476a-80ce-0ac388d4906a] terminated with exception: Writing job aborted
+    pyspark.errors.exceptions.StreamingQueryException: [STREAM_FAILED] Query [id = 74eb53a8-89bd-49b0-9313-14d29eed03aa, runId = 9f2d5cf6-a373-478d-b718-2c2b6d8a0f24] terminated with exception: Job aborted
 
 Solution:
 


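For reviewers: the bracketed prefixes in the updated output, such as [PARSE_SYNTAX_ERROR] and [STREAM_FAILED], are error classes from the error framework these docs now reflect. A minimal sketch of inspecting them programmatically (an illustration, not part of this commit; assumes a running SparkSession bound to `spark`, with `PySparkException` and its accessors from pyspark 3.4):

    from pyspark.errors import PySparkException

    try:
        spark.sql("select * 1")  # invalid SQL, raises ParseException
    except PySparkException as e:
        # The error class and message parameters back the [ERROR_CLASS]
        # prefix shown in the rendered message.
        print(e.getErrorClass())         # e.g. PARSE_SYNTAX_ERROR
        print(e.getMessageParameters())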