[ 
https://issues.apache.org/jira/browse/SPARK-34126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17273761#comment-17273761
 ] 

Attila Zsolt Piros commented on SPARK-34126:
--------------------------------------------

[~shikui] are you using the "-f <filename>" argument of spark-sql?

If yes, please make sure "hive.cli.errors.ignore" is set to false. 

The relevant code is:

https://github.com/apache/spark/blob/b350258c88dfd3eddb9392be5b85a3390a2d39e5/sql/hive-thriftserver/src/main/scala/org/apache/spark/sql/hive/thriftserver/SparkSQLCLIDriver.scala#L504-L508

As you can see, it must stop at the first error when "ignoreErrors" is false.
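The behaviour at the linked lines can be sketched roughly like this (a simplified illustration only, not the actual SparkSQLCLIDriver code; `runStatements` and the `processCmd` callback are hypothetical stand-ins for the real processing loop):

```scala
// Simplified sketch of the stop-at-first-error behaviour. The real logic
// lives in SparkSQLCLIDriver; names here are illustrative only.
def runStatements(statements: Seq[String],
                  ignoreErrors: Boolean,
                  processCmd: String => Int): Int = {
  var lastRet = 0
  for (stmt <- statements) {
    val ret = processCmd(stmt)   // 0 on success, non-zero on failure
    if (ret != 0) {
      lastRet = ret
      // When errors are NOT ignored, abort at the first failing statement,
      // so later statements never run against a half-written table.
      if (!ignoreErrors) return ret
    }
  }
  lastRet
}
```

So with "hive.cli.errors.ignore" set to false the first failing statement aborts the whole file; the property can be passed on the command line, e.g. (assuming the standard --hiveconf option) spark-sql --hiveconf hive.cli.errors.ignore=false -f queries.sql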


> SQL running error, spark does not exit, resulting in data quality problems
> --------------------------------------------------------------------------
>
>                 Key: SPARK-34126
>                 URL: https://issues.apache.org/jira/browse/SPARK-34126
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 3.0.1
>         Environment: spark3.0.1 on yarn 
>            Reporter: shikui ye
>            Priority: Major
>
> Spark SQL executes a SQL file containing multiple SQL segments. When one 
> of the segments fails, the Spark driver (and SparkContext) does not exit, 
> so the table written by that segment is left empty or containing old data. 
> Any subsequent SQL that depends on this problematic table will then have 
> data quality problems even if it runs successfully.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscr...@spark.apache.org
For additional commands, e-mail: issues-h...@spark.apache.org
