This is an automated email from the ASF dual-hosted git repository.

dongjoon pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new f05b658443c [SPARK-42945][CONNECT][FOLLOWUP] Disable JVM stack trace by default
f05b658443c is described below

commit f05b658443c59cf886aed0ea8ad8c75f502d18ac
Author: Takuya UESHIN <ues...@databricks.com>
AuthorDate: Thu May 11 21:23:15 2023 -0700

    [SPARK-42945][CONNECT][FOLLOWUP] Disable JVM stack trace by default
    
    ### What changes were proposed in this pull request?
    
    This is a follow-up of #40575.
    
    Disables the JVM stack trace by default.
    
    ```py
    % ./bin/pyspark --remote local
    ...
    >>> spark.conf.set("spark.sql.ansi.enabled", True)
    >>> spark.sql('select 1/0').show()
    ...
    Traceback (most recent call last):
    ...
    pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
    == SQL(line 1, position 8) ==
    select 1/0
           ^^^
    
    >>>
    >>> spark.conf.set("spark.sql.pyspark.jvmStacktrace.enabled", True)
    >>> spark.sql('select 1/0').show()
    ...
    Traceback (most recent call last):
    ...
    pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
    == SQL(line 1, position 8) ==
    select 1/0
           ^^^
    
    JVM stacktrace:
    org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
    == SQL(line 1, position 8) ==
    select 1/0
           ^^^
    
            at org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:226)
            at org.apache.spark.sql.catalyst.expressions.DivModLike.eval(arithmetic.scala:674)
    ...
    ```
    
    ### Why are the changes needed?
    
    Currently, the JVM stack trace is enabled by default.
    
    ```py
    % ./bin/pyspark --remote local
    ...
    >>> spark.conf.set("spark.sql.ansi.enabled", True)
    >>> spark.sql('select 1/0').show()
    ...
    Traceback (most recent call last):
    ...
    pyspark.errors.exceptions.connect.ArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
    == SQL(line 1, position 8) ==
    select 1/0
           ^^^
    
    JVM stacktrace:
    org.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
    == SQL(line 1, position 8) ==
    select 1/0
           ^^^
    
            at org.apache.spark.sql.errors.QueryExecutionErrors$.divideByZeroError(QueryExecutionErrors.scala:226)
            at org.apache.spark.sql.catalyst.expressions.DivModLike.eval(arithmetic.scala:674)
    ...
    ```
    
    ### Does this PR introduce _any_ user-facing change?
    
    Users won't see the JVM stack trace by default.
    
    ### How was this patch tested?
    
    Existing tests.
    
    Closes #41148 from ueshin/issues/SPARK-42945/default.
    
    Authored-by: Takuya UESHIN <ues...@databricks.com>
    Signed-off-by: Dongjoon Hyun <dongj...@apache.org>
---
 .../org/apache/spark/sql/connect/service/SparkConnectService.scala      | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/connector/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectService.scala b/connector/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectService.scala
index b444fc67ce1..c1647fd85a0 100644
--- a/connector/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectService.scala
+++ b/connector/connect/server/src/main/scala/org/apache/spark/sql/connect/service/SparkConnectService.scala
@@ -125,7 +125,7 @@ class SparkConnectService(debug: Boolean)
       SparkConnectService
         .getOrCreateIsolatedSession(userId, sessionId)
         .session
-    val stackTraceEnabled = session.conf.get(PYSPARK_JVM_STACKTRACE_ENABLED.key, "true").toBoolean
+    val stackTraceEnabled = session.conf.get(PYSPARK_JVM_STACKTRACE_ENABLED)
 
     {
       case se: SparkException if isPythonExecutionException(se) =>

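For context, the one-line change above swaps a string lookup with a hard-coded `"true"` fallback for a typed lookup that uses the config entry's own declared default. Below is a minimal, self-contained Scala sketch of that difference; `ConfigEntry` and `RuntimeConf` here are simplified stand-ins for Spark's internal classes, not the actual Spark source, and the `false` default mirrors what `SQLConf` declares for `spark.sql.pyspark.jvmStacktrace.enabled`.

```scala
// Minimal sketch (hypothetical stand-ins, not Spark's actual classes) of the
// semantic difference between the removed and the added lookup.
final case class ConfigEntry[T](key: String, default: T)

class RuntimeConf(settings: Map[String, String]) {
  // String lookup: falls back to whatever literal the caller hard-codes.
  def get(key: String, fallback: String): String =
    settings.getOrElse(key, fallback)

  // Typed lookup: falls back to the entry's own declared default.
  def get(entry: ConfigEntry[Boolean]): Boolean =
    settings.get(entry.key).map(_.toBoolean).getOrElse(entry.default)
}

object StackTraceDefaultDemo extends App {
  // SQLConf declares this entry with a default of false.
  val PYSPARK_JVM_STACKTRACE_ENABLED =
    ConfigEntry("spark.sql.pyspark.jvmStacktrace.enabled", default = false)

  val conf = new RuntimeConf(Map.empty) // the user has not set the flag

  // Old code path: the hard-coded "true" fallback wins => stack trace on.
  println(conf.get(PYSPARK_JVM_STACKTRACE_ENABLED.key, "true").toBoolean) // true

  // New code path: the entry's default (false) wins => stack trace off.
  println(conf.get(PYSPARK_JVM_STACKTRACE_ENABLED)) // false
}
```

Setting `spark.sql.pyspark.jvmStacktrace.enabled` to `true` at the session level, as in the first example above, restores the previous behavior.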

---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
