This is an automated email from the ASF dual-hosted git repository.

gengliang pushed a commit to branch branch-3.3
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/branch-3.3 by this push:
     new e088c820e1e [SPARK-39212][SQL][3.3] Use double quotes for values of 
SQL configs/DS options in error messages
e088c820e1e is described below

commit e088c820e1ee5736e130f5d7d1030990b0059141
Author: Max Gekk <max.g...@gmail.com>
AuthorDate: Thu May 19 16:51:20 2022 +0800

    [SPARK-39212][SQL][3.3] Use double quotes for values of SQL configs/DS 
options in error messages
    
    ### What changes were proposed in this pull request?
    Wrap values of SQL configs and datasource options in error messages in 
double quotes. Added the `toDSOption()` method to `QueryErrorsBase` to quote DS 
options.
    
    This is a backport of https://github.com/apache/spark/pull/36579.
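    As a rough sketch (a hypothetical standalone object, not Spark's actual 
trait), the new helper mirrors the existing `toSQLConf` and reuses the same 
double-quote wrapper:

    ```scala
    // Minimal sketch of the quoting helpers, assuming quoteByDefault simply
    // wraps its argument in double quotes (as QueryErrorsBase does).
    object QuotingSketch {
      private def quoteByDefault(elem: String): String = "\"" + elem + "\""

      // Existing helper: quote SQL config names and values.
      def toSQLConf(conf: String): String = quoteByDefault(conf)

      // Added by this patch: quote datasource option names the same way.
      def toDSOption(option: String): String = quoteByDefault(option)
    }
    ```

    With these helpers, a hint such as `set spark.sql.ansi.enabled to false` 
renders as `set "spark.sql.ansi.enabled" to "false"`, which is what the 
updated golden files below check.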
    
    ### Why are the changes needed?
    1. To highlight SQL config/DS option values and make them more visible to 
users.
    2. To be able to easily parse values from error text.
    3. To be consistent with other outputs such as identifiers and SQL 
statements, where Spark uses quotes or backticks.
    
    ### Does this PR introduce _any_ user-facing change?
    Yes, it changes user-facing error messages.
    
    ### How was this patch tested?
    By running the modified test suites:
    ```
    $ build/sbt "testOnly *QueryCompilationErrorsSuite"
    $ build/sbt "testOnly *QueryExecutionAnsiErrorsSuite"
    $ build/sbt "testOnly *QueryExecutionErrorsSuite"
    ```
    
    Authored-by: Max Gekk <max.gekk@gmail.com>
    Signed-off-by: Max Gekk <max.gekk@gmail.com>
    (cherry picked from commit 96f4b7dbc1facd1a38be296263606aa312861c95)
    Signed-off-by: Max Gekk <max.gekk@gmail.com>
    
    Closes #36600 from MaxGekk/move-ise-from-query-errors-3.3-2.
    
    Authored-by: Max Gekk <max.g...@gmail.com>
    Signed-off-by: Gengliang Wang <gengli...@apache.org>
---
 core/src/main/resources/error/error-classes.json   | 18 +++---
 .../org/apache/spark/SparkThrowableSuite.scala     |  4 +-
 .../apache/spark/sql/errors/QueryErrorsBase.scala  |  4 ++
 .../spark/sql/errors/QueryExecutionErrors.scala    | 20 ++++---
 .../resources/sql-tests/results/ansi/array.sql.out | 24 ++++----
 .../resources/sql-tests/results/ansi/cast.sql.out  | 70 +++++++++++-----------
 .../resources/sql-tests/results/ansi/date.sql.out  |  6 +-
 .../results/ansi/datetime-parsing-invalid.sql.out  |  4 +-
 .../ansi/decimalArithmeticOperations.sql.out       |  8 +--
 .../sql-tests/results/ansi/interval.sql.out        | 40 ++++++-------
 .../resources/sql-tests/results/ansi/map.sql.out   |  8 +--
 .../results/ansi/string-functions.sql.out          |  8 +--
 .../sql-tests/results/ansi/timestamp.sql.out       |  2 +-
 .../resources/sql-tests/results/interval.sql.out   | 18 +++---
 .../sql-tests/results/postgreSQL/boolean.sql.out   | 32 +++++-----
 .../sql-tests/results/postgreSQL/float4.sql.out    | 14 ++---
 .../sql-tests/results/postgreSQL/float8.sql.out    | 10 ++--
 .../sql-tests/results/postgreSQL/int4.sql.out      | 12 ++--
 .../sql-tests/results/postgreSQL/int8.sql.out      | 22 +++----
 .../results/postgreSQL/select_having.sql.out       |  2 +-
 .../sql-tests/results/postgreSQL/text.sql.out      |  4 +-
 .../results/postgreSQL/window_part2.sql.out        |  6 +-
 .../results/postgreSQL/window_part3.sql.out        |  2 +-
 .../results/postgreSQL/window_part4.sql.out        |  2 +-
 .../results/timestampNTZ/timestamp-ansi.sql.out    |  4 +-
 .../udf/postgreSQL/udf-select_having.sql.out       |  2 +-
 .../sql/errors/QueryExecutionErrorsSuite.scala     | 12 ++--
 27 files changed, 184 insertions(+), 174 deletions(-)

diff --git a/core/src/main/resources/error/error-classes.json 
b/core/src/main/resources/error/error-classes.json
index 7fef9e563c2..e4ab3a7a353 100644
--- a/core/src/main/resources/error/error-classes.json
+++ b/core/src/main/resources/error/error-classes.json
@@ -4,7 +4,7 @@
     "sqlState" : "42000"
   },
   "ARITHMETIC_OVERFLOW" : {
-    "message" : [ "<message>.<alternative> If necessary set <config> to false 
(except for ANSI interval type) to bypass this error.<context>" ],
+    "message" : [ "<message>.<alternative> If necessary set <config> to 
\"false\" (except for ANSI interval type) to bypass this error.<context>" ],
     "sqlState" : "22003"
   },
   "CANNOT_CAST_DATATYPE" : {
@@ -12,7 +12,7 @@
     "sqlState" : "22005"
   },
   "CANNOT_CHANGE_DECIMAL_PRECISION" : {
-    "message" : [ "<value> cannot be represented as Decimal(<precision>, 
<scale>). If necessary set <config> to false to bypass this error.<details>" ],
+    "message" : [ "<value> cannot be represented as Decimal(<precision>, 
<scale>). If necessary set <config> to \"false\" to bypass this 
error.<details>" ],
     "sqlState" : "22005"
   },
   "CANNOT_PARSE_DECIMAL" : {
@@ -26,11 +26,11 @@
     "message" : [ "Cannot use a mixture of aggregate function and group 
aggregate pandas UDF" ]
   },
   "CAST_INVALID_INPUT" : {
-    "message" : [ "The value <value> of the type <sourceType> cannot be cast 
to <targetType> because it is malformed. To return NULL instead, use 
`try_cast`. If necessary set <config> to false to bypass this error.<details>" 
],
+    "message" : [ "The value <value> of the type <sourceType> cannot be cast 
to <targetType> because it is malformed. To return NULL instead, use 
`try_cast`. If necessary set <config> to \"false\" to bypass this 
error.<details>" ],
     "sqlState" : "42000"
   },
   "CAST_OVERFLOW" : {
-    "message" : [ "The value <value> of the type <sourceType> cannot be cast 
to <targetType> due to an overflow. To return NULL instead, use `try_cast`. If 
necessary set <config> to false to bypass this error." ],
+    "message" : [ "The value <value> of the type <sourceType> cannot be cast 
to <targetType> due to an overflow. To return NULL instead, use `try_cast`. If 
necessary set <config> to \"false\" to bypass this error." ],
     "sqlState" : "22005"
   },
   "CONCURRENT_QUERY" : {
@@ -41,7 +41,7 @@
     "sqlState" : "22008"
   },
   "DIVIDE_BY_ZERO" : {
-    "message" : [ "Division by zero. To return NULL instead, use `try_divide`. 
If necessary set <config> to false (except for ANSI interval type) to bypass 
this error.<details>" ],
+    "message" : [ "Division by zero. To return NULL instead, use `try_divide`. 
If necessary set <config> to \"false\" (except for ANSI interval type) to 
bypass this error.<details>" ],
     "sqlState" : "22012"
   },
   "DUPLICATE_KEY" : {
@@ -93,17 +93,17 @@
     "message" : [ "<message>" ]
   },
   "INVALID_ARRAY_INDEX" : {
-    "message" : [ "The index <indexValue> is out of bounds. The array has 
<arraySize> elements. If necessary set <config> to false to bypass this error." 
]
+    "message" : [ "The index <indexValue> is out of bounds. The array has 
<arraySize> elements. If necessary set <config> to \"false\" to bypass this 
error." ]
   },
   "INVALID_ARRAY_INDEX_IN_ELEMENT_AT" : {
-    "message" : [ "The index <indexValue> is out of bounds. The array has 
<arraySize> elements. To return NULL instead, use `try_element_at`. If 
necessary set <config> to false to bypass this error." ]
+    "message" : [ "The index <indexValue> is out of bounds. The array has 
<arraySize> elements. To return NULL instead, use `try_element_at`. If 
necessary set <config> to \"false\" to bypass this error." ]
   },
   "INVALID_FIELD_NAME" : {
     "message" : [ "Field name <fieldName> is invalid: <path> is not a struct." 
],
     "sqlState" : "42000"
   },
   "INVALID_FRACTION_OF_SECOND" : {
-    "message" : [ "The fraction of sec must be zero. Valid range is [0, 60]. 
If necessary set <config> to false to bypass this error. " ],
+    "message" : [ "The fraction of sec must be zero. Valid range is [0, 60]. 
If necessary set <config> to \"false\" to bypass this error. " ],
     "sqlState" : "22023"
   },
   "INVALID_JSON_SCHEMA_MAP_TYPE" : {
@@ -118,7 +118,7 @@
     "sqlState" : "42000"
   },
   "MAP_KEY_DOES_NOT_EXIST" : {
-    "message" : [ "Key <keyValue> does not exist. To return NULL instead, use 
'try_element_at'. If necessary set <config> to false to bypass this 
error.<details>" ]
+    "message" : [ "Key <keyValue> does not exist. To return NULL instead, use 
`try_element_at`. If necessary set <config> to \"false\" to bypass this 
error.<details>" ]
   },
   "MISSING_COLUMN" : {
     "message" : [ "Column '<columnName>' does not exist. Did you mean one of 
the following? [<proposal>]" ],
diff --git a/core/src/test/scala/org/apache/spark/SparkThrowableSuite.scala 
b/core/src/test/scala/org/apache/spark/SparkThrowableSuite.scala
index acfe721914b..208e278e2a1 100644
--- a/core/src/test/scala/org/apache/spark/SparkThrowableSuite.scala
+++ b/core/src/test/scala/org/apache/spark/SparkThrowableSuite.scala
@@ -125,8 +125,8 @@ class SparkThrowableSuite extends SparkFunSuite {
 
     // Does not fail with too many args (expects 0 args)
     assert(getMessage("DIVIDE_BY_ZERO", Array("foo", "bar", "baz")) ==
-      "Division by zero. To return NULL instead, use `try_divide`. If 
necessary set foo to false " +
-        "(except for ANSI interval type) to bypass this error.bar")
+      "Division by zero. To return NULL instead, use `try_divide`. If 
necessary set foo " +
+      "to \"false\" (except for ANSI interval type) to bypass this error.bar")
   }
 
   test("Error message is formatted") {
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryErrorsBase.scala 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryErrorsBase.scala
index d51ee13acef..89bc1039e73 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryErrorsBase.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryErrorsBase.scala
@@ -63,4 +63,8 @@ trait QueryErrorsBase {
   def toSQLConf(conf: String): String = {
     quoteByDefault(conf)
   }
+
+  def toDSOption(option: String): String = {
+    quoteByDefault(option)
+  }
 }
diff --git 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
index c350a6b28ba..9f07dd1ce19 100644
--- 
a/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
+++ 
b/sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala
@@ -214,8 +214,12 @@ object QueryExecutionErrors extends QueryErrorsBase {
   }
 
   def mapKeyNotExistError(key: Any, dataType: DataType, context: String): 
NoSuchElementException = {
-    new SparkNoSuchElementException(errorClass = "MAP_KEY_DOES_NOT_EXIST",
-      messageParameters = Array(toSQLValue(key, dataType), 
SQLConf.ANSI_ENABLED.key, context))
+    new SparkNoSuchElementException(
+      errorClass = "MAP_KEY_DOES_NOT_EXIST",
+      messageParameters = Array(
+        toSQLValue(key, dataType),
+        toSQLConf(SQLConf.ANSI_ENABLED.key),
+        context))
   }
 
   def inputTypeUnsupportedError(dataType: DataType): Throwable = {
@@ -578,6 +582,7 @@ object QueryExecutionErrors extends QueryErrorsBase {
     new IllegalStateException(s"unrecognized format $format")
   }
 
+  // scalastyle:off line.size.limit
   def sparkUpgradeInReadingDatesError(
       format: String, config: String, option: String): SparkUpgradeException = 
{
     new SparkUpgradeException(
@@ -590,14 +595,15 @@ object QueryExecutionErrors extends QueryErrorsBase {
            |Spark 2.x or legacy versions of Hive, which uses a legacy hybrid 
calendar
            |that is different from Spark 3.0+'s Proleptic Gregorian calendar.
            |See more details in SPARK-31404. You can set the SQL config 
${toSQLConf(config)} or
-           |the datasource option '$option' to 'LEGACY' to rebase the datetime 
values
+           |the datasource option ${toDSOption(option)} to "LEGACY" to rebase 
the datetime values
            |w.r.t. the calendar difference during reading. To read the 
datetime values
-           |as it is, set the SQL config ${toSQLConf(config)} or the 
datasource option '$option'
-           |to 'CORRECTED'.
+           |as it is, set the SQL config ${toSQLConf(config)} or the 
datasource option ${toDSOption(option)}
+           |to "CORRECTED".
            |""".stripMargin),
       cause = null
     )
   }
+  // scalastyle:on line.size.limit
 
   def sparkUpgradeInWritingDatesError(format: String, config: String): 
SparkUpgradeException = {
     new SparkUpgradeException(
@@ -609,9 +615,9 @@ object QueryExecutionErrors extends QueryErrorsBase {
           |into $format files can be dangerous, as the files may be read by 
Spark 2.x
           |or legacy versions of Hive later, which uses a legacy hybrid 
calendar that
           |is different from Spark 3.0+'s Proleptic Gregorian calendar. See 
more
-          |details in SPARK-31404. You can set ${toSQLConf(config)} to 
'LEGACY' to rebase the
+          |details in SPARK-31404. You can set ${toSQLConf(config)} to 
"LEGACY" to rebase the
           |datetime values w.r.t. the calendar difference during writing, to 
get maximum
-          |interoperability. Or set ${toSQLConf(config)} to 'CORRECTED' to 
write the datetime
+          |interoperability. Or set ${toSQLConf(config)} to "CORRECTED" to 
write the datetime
           |values as it is, if you are 100% sure that the written files will 
only be read by
           |Spark 3.0+ or other systems that use Proleptic Gregorian calendar.
           |""".stripMargin),
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/array.sql.out 
b/sql/core/src/test/resources/sql-tests/results/ansi/array.sql.out
index 64a7cc68b9c..25d2704c2c8 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/array.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/array.sql.out
@@ -168,7 +168,7 @@ select element_at(array(1, 2, 3), 5)
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index 5 is out of bounds. The array has 3 elements. To return NULL 
instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to 
false to bypass this error.
+The index 5 is out of bounds. The array has 3 elements. To return NULL 
instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to 
"false" to bypass this error.
 
 
 -- !query
@@ -177,7 +177,7 @@ select element_at(array(1, 2, 3), -5)
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index -5 is out of bounds. The array has 3 elements. To return NULL 
instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to 
false to bypass this error.
+The index -5 is out of bounds. The array has 3 elements. To return NULL 
instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to 
"false" to bypass this error.
 
 
 -- !query
@@ -195,7 +195,7 @@ select elt(4, '123', '456')
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index 4 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The index 4 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -204,7 +204,7 @@ select elt(0, '123', '456')
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index 0 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The index 0 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -213,7 +213,7 @@ select elt(-1, '123', '456')
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index -1 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The index -1 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -254,7 +254,7 @@ select array(1, 2, 3)[5]
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index 5 is out of bounds. The array has 3 elements. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The index 5 is out of bounds. The array has 3 elements. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -263,7 +263,7 @@ select array(1, 2, 3)[-1]
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index -1 is out of bounds. The array has 3 elements. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The index -1 is out of bounds. The array has 3 elements. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -337,7 +337,7 @@ select element_at(array(1, 2, 3), 5)
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index 5 is out of bounds. The array has 3 elements. To return NULL 
instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to 
false to bypass this error.
+The index 5 is out of bounds. The array has 3 elements. To return NULL 
instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to 
"false" to bypass this error.
 
 
 -- !query
@@ -346,7 +346,7 @@ select element_at(array(1, 2, 3), -5)
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index -5 is out of bounds. The array has 3 elements. To return NULL 
instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to 
false to bypass this error.
+The index -5 is out of bounds. The array has 3 elements. To return NULL 
instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to 
"false" to bypass this error.
 
 
 -- !query
@@ -364,7 +364,7 @@ select elt(4, '123', '456')
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index 4 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The index 4 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -373,7 +373,7 @@ select elt(0, '123', '456')
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index 0 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The index 0 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -382,4 +382,4 @@ select elt(-1, '123', '456')
 struct<>
 -- !query output
 org.apache.spark.SparkArrayIndexOutOfBoundsException
-The index -1 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The index -1 is out of bounds. The array has 2 elements. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/cast.sql.out 
b/sql/core/src/test/resources/sql-tests/results/ansi/cast.sql.out
index 1ca6c64b4e0..654433c0ca5 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/cast.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/cast.sql.out
@@ -8,7 +8,7 @@ SELECT CAST('1.23' AS int)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '1.23' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '1.23' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('1.23' AS int)
        ^^^^^^^^^^^^^^^^^^^
@@ -20,7 +20,7 @@ SELECT CAST('1.23' AS long)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '1.23' of the type "STRING" cannot be cast to "BIGINT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '1.23' of the type "STRING" cannot be cast to "BIGINT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('1.23' AS long)
        ^^^^^^^^^^^^^^^^^^^^
@@ -32,7 +32,7 @@ SELECT CAST('-4.56' AS int)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '-4.56' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '-4.56' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('-4.56' AS int)
        ^^^^^^^^^^^^^^^^^^^^
@@ -44,7 +44,7 @@ SELECT CAST('-4.56' AS long)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '-4.56' of the type "STRING" cannot be cast to "BIGINT" because it 
is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '-4.56' of the type "STRING" cannot be cast to "BIGINT" because it 
is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('-4.56' AS long)
        ^^^^^^^^^^^^^^^^^^^^^
@@ -56,7 +56,7 @@ SELECT CAST('abc' AS int)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'abc' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value 'abc' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('abc' AS int)
        ^^^^^^^^^^^^^^^^^^
@@ -68,7 +68,7 @@ SELECT CAST('abc' AS long)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'abc' of the type "STRING" cannot be cast to "BIGINT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value 'abc' of the type "STRING" cannot be cast to "BIGINT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('abc' AS long)
        ^^^^^^^^^^^^^^^^^^^
@@ -80,7 +80,7 @@ SELECT CAST('abc' AS float)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'abc' of the type "STRING" cannot be cast to "FLOAT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value 'abc' of the type "STRING" cannot be cast to "FLOAT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('abc' AS float)
        ^^^^^^^^^^^^^^^^^^^^
@@ -92,7 +92,7 @@ SELECT CAST('abc' AS double)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'abc' of the type "STRING" cannot be cast to "DOUBLE" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value 'abc' of the type "STRING" cannot be cast to "DOUBLE" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('abc' AS double)
        ^^^^^^^^^^^^^^^^^^^^^
@@ -104,7 +104,7 @@ SELECT CAST('1234567890123' AS int)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '1234567890123' of the type "STRING" cannot be cast to "INT" because 
it is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '1234567890123' of the type "STRING" cannot be cast to "INT" because 
it is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('1234567890123' AS int)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -116,7 +116,7 @@ SELECT CAST('12345678901234567890123' AS long)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '12345678901234567890123' of the type "STRING" cannot be cast to 
"BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If 
necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '12345678901234567890123' of the type "STRING" cannot be cast to 
"BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If 
necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('12345678901234567890123' AS long)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -128,7 +128,7 @@ SELECT CAST('' AS int)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('' AS int)
        ^^^^^^^^^^^^^^^
@@ -140,7 +140,7 @@ SELECT CAST('' AS long)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '' of the type "STRING" cannot be cast to "BIGINT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '' of the type "STRING" cannot be cast to "BIGINT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('' AS long)
        ^^^^^^^^^^^^^^^^
@@ -152,7 +152,7 @@ SELECT CAST('' AS float)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '' of the type "STRING" cannot be cast to "FLOAT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '' of the type "STRING" cannot be cast to "FLOAT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('' AS float)
        ^^^^^^^^^^^^^^^^^
@@ -164,7 +164,7 @@ SELECT CAST('' AS double)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '' of the type "STRING" cannot be cast to "DOUBLE" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '' of the type "STRING" cannot be cast to "DOUBLE" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('' AS double)
        ^^^^^^^^^^^^^^^^^^
@@ -192,7 +192,7 @@ SELECT CAST('123.a' AS int)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '123.a' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '123.a' of the type "STRING" cannot be cast to "INT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('123.a' AS int)
        ^^^^^^^^^^^^^^^^^^^^
@@ -204,7 +204,7 @@ SELECT CAST('123.a' AS long)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '123.a' of the type "STRING" cannot be cast to "BIGINT" because it 
is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '123.a' of the type "STRING" cannot be cast to "BIGINT" because it 
is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('123.a' AS long)
        ^^^^^^^^^^^^^^^^^^^^^
@@ -216,7 +216,7 @@ SELECT CAST('123.a' AS float)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '123.a' of the type "STRING" cannot be cast to "FLOAT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '123.a' of the type "STRING" cannot be cast to "FLOAT" because it is 
malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('123.a' AS float)
        ^^^^^^^^^^^^^^^^^^^^^^
@@ -228,7 +228,7 @@ SELECT CAST('123.a' AS double)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '123.a' of the type "STRING" cannot be cast to "DOUBLE" because it 
is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '123.a' of the type "STRING" cannot be cast to "DOUBLE" because it 
is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('123.a' AS double)
        ^^^^^^^^^^^^^^^^^^^^^^^
@@ -248,7 +248,7 @@ SELECT CAST('-2147483649' AS int)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '-2147483649' of the type "STRING" cannot be cast to "INT" because 
it is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '-2147483649' of the type "STRING" cannot be cast to "INT" because 
it is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('-2147483649' AS int)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -268,7 +268,7 @@ SELECT CAST('2147483648' AS int)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '2147483648' of the type "STRING" cannot be cast to "INT" because it 
is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to false to bypass this error.
+The value '2147483648' of the type "STRING" cannot be cast to "INT" because it 
is malformed. To return NULL instead, use `try_cast`. If necessary set 
"spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('2147483648' AS int)
        ^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -288,7 +288,7 @@ SELECT CAST('-9223372036854775809' AS long)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '-9223372036854775809' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '-9223372036854775809' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('-9223372036854775809' AS long)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -308,7 +308,7 @@ SELECT CAST('9223372036854775808' AS long)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '9223372036854775808' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '9223372036854775808' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT CAST('9223372036854775808' AS long)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -567,7 +567,7 @@ select cast('1中文' as tinyint)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '1中文' of the type "STRING" cannot be cast to "TINYINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '1中文' of the type "STRING" cannot be cast to "TINYINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('1中文' as tinyint)
        ^^^^^^^^^^^^^^^^^^^^^^
@@ -579,7 +579,7 @@ select cast('1中文' as smallint)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '1中文' of the type "STRING" cannot be cast to "SMALLINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '1中文' of the type "STRING" cannot be cast to "SMALLINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('1中文' as smallint)
        ^^^^^^^^^^^^^^^^^^^^^^^
@@ -591,7 +591,7 @@ select cast('1中文' as INT)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '1中文' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '1中文' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('1中文' as INT)
        ^^^^^^^^^^^^^^^^^^
@@ -603,7 +603,7 @@ select cast('中文1' as bigint)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '中文1' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '中文1' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('中文1' as bigint)
        ^^^^^^^^^^^^^^^^^^^^^
@@ -615,7 +615,7 @@ select cast('1中文' as bigint)
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '1中文' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '1中文' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('1中文' as bigint)
        ^^^^^^^^^^^^^^^^^^^^^
@@ -646,7 +646,7 @@ struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
 The value '    
- xyz   
' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To 
return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" 
to false to bypass this error.
+ xyz   
' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To 
return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" 
to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('\t\n xyz \t\r' as boolean)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -666,7 +666,7 @@ select cast('123.45' as decimal(4, 2))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Decimal(expanded, 123.45, 5, 2) cannot be represented as Decimal(4, 2). If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+Decimal(expanded, 123.45, 5, 2) cannot be represented as Decimal(4, 2). If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('123.45' as decimal(4, 2))
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -678,7 +678,7 @@ select cast('xyz' as decimal(4, 2))
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'xyz' of the type "STRING" cannot be cast to "DECIMAL(4,2)" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'xyz' of the type "STRING" cannot be cast to "DECIMAL(4,2)" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('xyz' as decimal(4, 2))
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -698,7 +698,7 @@ select cast('a' as date)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value 'a' of the type "STRING" cannot be cast to "DATE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "DATE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('a' as date)
        ^^^^^^^^^^^^^^^^^
@@ -718,7 +718,7 @@ select cast('a' as timestamp)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value 'a' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('a' as timestamp)
        ^^^^^^^^^^^^^^^^^^^^^^
@@ -738,7 +738,7 @@ select cast('a' as timestamp_ntz)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value 'a' of the type "STRING" cannot be cast to "TIMESTAMP_NTZ" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "TIMESTAMP_NTZ" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast('a' as timestamp_ntz)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -750,7 +750,7 @@ select cast(cast('inf' as double) as timestamp)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value Infinity of the type "DOUBLE" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value Infinity of the type "DOUBLE" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast(cast('inf' as double) as timestamp)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -762,7 +762,7 @@ select cast(cast('inf' as float) as timestamp)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value Infinity of the type "DOUBLE" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value Infinity of the type "DOUBLE" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast(cast('inf' as float) as timestamp)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/date.sql.out b/sql/core/src/test/resources/sql-tests/results/ansi/date.sql.out
index 844ecacc9aa..2cf50284d66 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/date.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/date.sql.out
@@ -232,7 +232,7 @@ select next_day("xx", "Mon")
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value 'xx' of the type "STRING" cannot be cast to "DATE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'xx' of the type "STRING" cannot be cast to "DATE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select next_day("xx", "Mon")
        ^^^^^^^^^^^^^^^^^^^^^
@@ -327,7 +327,7 @@ select date_add('2011-11-11', '1.2')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '1.2' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '1.2' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select date_add('2011-11-11', '1.2')
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -438,7 +438,7 @@ select date_sub(date'2011-11-11', '1.2')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value '1.2' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '1.2' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select date_sub(date'2011-11-11', '1.2')
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/datetime-parsing-invalid.sql.out b/sql/core/src/test/resources/sql-tests/results/ansi/datetime-parsing-invalid.sql.out
index 1632ea0a239..d1eb604d4fc 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/datetime-parsing-invalid.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/datetime-parsing-invalid.sql.out
@@ -242,7 +242,7 @@ select cast("Unparseable" as timestamp)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value 'Unparseable' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'Unparseable' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast("Unparseable" as timestamp)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -254,7 +254,7 @@ select cast("Unparseable" as date)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value 'Unparseable' of the type "STRING" cannot be cast to "DATE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'Unparseable' of the type "STRING" cannot be cast to "DATE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select cast("Unparseable" as date)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/decimalArithmeticOperations.sql.out b/sql/core/src/test/resources/sql-tests/results/ansi/decimalArithmeticOperations.sql.out
index 5a2f964a31f..5db875ff10a 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/decimalArithmeticOperations.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/decimalArithmeticOperations.sql.out
@@ -76,7 +76,7 @@ select (5e36BD + 0.1) + 5e36BD
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Decimal(expanded, 10000000000000000000000000000000000000.1, 39, 1) cannot be represented as Decimal(38, 1). If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+Decimal(expanded, 10000000000000000000000000000000000000.1, 39, 1) cannot be represented as Decimal(38, 1). If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select (5e36BD + 0.1) + 5e36BD
        ^^^^^^^^^^^^^^^^^^^^^^^
@@ -88,7 +88,7 @@ select (-4e36BD - 0.1) - 7e36BD
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Decimal(expanded, -11000000000000000000000000000000000000.1, 39, 1) cannot be represented as Decimal(38, 1). If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+Decimal(expanded, -11000000000000000000000000000000000000.1, 39, 1) cannot be represented as Decimal(38, 1). If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select (-4e36BD - 0.1) - 7e36BD
        ^^^^^^^^^^^^^^^^^^^^^^^^
@@ -100,7 +100,7 @@ select 12345678901234567890.0 * 12345678901234567890.0
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Decimal(expanded, 152415787532388367501905199875019052100, 39, 0) cannot be represented as Decimal(38, 2). If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+Decimal(expanded, 152415787532388367501905199875019052100, 39, 0) cannot be represented as Decimal(38, 2). If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select 12345678901234567890.0 * 12345678901234567890.0
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -112,7 +112,7 @@ select 1e35BD / 0.1
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Decimal(expanded, 1000000000000000000000000000000000000, 37, 0) cannot be represented as Decimal(38, 6). If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+Decimal(expanded, 1000000000000000000000000000000000000, 37, 0) cannot be represented as Decimal(38, 6). If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select 1e35BD / 0.1
        ^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/interval.sql.out b/sql/core/src/test/resources/sql-tests/results/ansi/interval.sql.out
index ddb829f6b5f..5d2ead16511 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/interval.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/interval.sql.out
@@ -122,7 +122,7 @@ select interval 2 second * 'a'
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select interval 2 second * 'a'
        ^^^^^^^^^^^^^^^^^^^^^^^
@@ -134,7 +134,7 @@ select interval 2 second / 'a'
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select interval 2 second / 'a'
        ^^^^^^^^^^^^^^^^^^^^^^^
@@ -146,7 +146,7 @@ select interval 2 year * 'a'
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select interval 2 year * 'a'
        ^^^^^^^^^^^^^^^^^^^^^
@@ -158,7 +158,7 @@ select interval 2 year / 'a'
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select interval 2 year / 'a'
        ^^^^^^^^^^^^^^^^^^^^^
@@ -186,7 +186,7 @@ select 'a' * interval 2 second
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select 'a' * interval 2 second
        ^^^^^^^^^^^^^^^^^^^^^^^
@@ -198,7 +198,7 @@ select 'a' * interval 2 year
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select 'a' * interval 2 year
        ^^^^^^^^^^^^^^^^^^^^^
@@ -228,7 +228,7 @@ select interval '2 seconds' / 0
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to false (except for ANSI interval type) to bypass this error.
+Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 select interval '2 seconds' / 0
        ^^^^^^^^^^^^^^^^^^^^^^^^
@@ -264,7 +264,7 @@ select interval '2' year / 0
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to false (except for ANSI interval type) to bypass this error.
+Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 select interval '2' year / 0
        ^^^^^^^^^^^^^^^^^^^^^
@@ -664,7 +664,7 @@ select make_interval(0, 0, 0, 0, 0, 0, 1234567890123456789)
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Decimal(expanded, 1234567890123456789, 20, 0) cannot be represented as Decimal(18, 6). If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+Decimal(expanded, 1234567890123456789, 20, 0) cannot be represented as Decimal(18, 6). If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select make_interval(0, 0, 0, 0, 0, 0, 1234567890123456789)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1516,7 +1516,7 @@ select '4 11:11' - interval '4 22:12' day to minute
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value '4 11:11' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '4 11:11' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select '4 11:11' - interval '4 22:12' day to minute
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1528,7 +1528,7 @@ select '4 12:12:12' + interval '4 22:12' day to minute
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value '4 12:12:12' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '4 12:12:12' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select '4 12:12:12' + interval '4 22:12' day to minute
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1566,7 +1566,7 @@ select str - interval '4 22:12' day to minute from interval_view
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value '1' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '1' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select str - interval '4 22:12' day to minute from interval_view
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1578,7 +1578,7 @@ select str + interval '4 22:12' day to minute from interval_view
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value '1' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '1' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select str + interval '4 22:12' day to minute from interval_view
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -1789,7 +1789,7 @@ select -(a) from values (interval '-2147483648 months', interval '2147483647 mon
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 
 
 -- !query
@@ -1798,7 +1798,7 @@ select a - b from values (interval '-2147483648 months', interval '2147483647 mo
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 
 
 -- !query
@@ -1807,7 +1807,7 @@ select b + interval '1 month' from values (interval '-2147483648 months', interv
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 
 
 -- !query
@@ -2036,7 +2036,7 @@ SELECT (INTERVAL '-178956970-8' YEAR TO MONTH) / -1
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT (INTERVAL '-178956970-8' YEAR TO MONTH) / -1
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -2048,7 +2048,7 @@ SELECT (INTERVAL '-178956970-8' YEAR TO MONTH) / -1L
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT (INTERVAL '-178956970-8' YEAR TO MONTH) / -1L
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -2094,7 +2094,7 @@ SELECT (INTERVAL '-106751991 04:00:54.775808' DAY TO SECOND) / -1
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT (INTERVAL '-106751991 04:00:54.775808' DAY TO SECOND) / -1
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -2106,7 +2106,7 @@ SELECT (INTERVAL '-106751991 04:00:54.775808' DAY TO SECOND) / -1L
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT (INTERVAL '-106751991 04:00:54.775808' DAY TO SECOND) / -1L
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/map.sql.out b/sql/core/src/test/resources/sql-tests/results/ansi/map.sql.out
index eb2f305e4b0..b54cc6d48bf 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/map.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/map.sql.out
@@ -8,7 +8,7 @@ select element_at(map(1, 'a', 2, 'b'), 5)
 struct<>
 -- !query output
 org.apache.spark.SparkNoSuchElementException
-Key 5 does not exist. To return NULL instead, use 'try_element_at'. If necessary set spark.sql.ansi.enabled to false to bypass this error.
+Key 5 does not exist. To return NULL instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select element_at(map(1, 'a', 2, 'b'), 5)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -20,7 +20,7 @@ select map(1, 'a', 2, 'b')[5]
 struct<>
 -- !query output
 org.apache.spark.SparkNoSuchElementException
-Key 5 does not exist. To return NULL instead, use 'try_element_at'. If necessary set spark.sql.ansi.enabled to false to bypass this error.
+Key 5 does not exist. To return NULL instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select map(1, 'a', 2, 'b')[5]
        ^^^^^^^^^^^^^^^^^^^^^^
@@ -114,7 +114,7 @@ select element_at(map(1, 'a', 2, 'b'), 5)
 struct<>
 -- !query output
 org.apache.spark.SparkNoSuchElementException
-Key 5 does not exist. To return NULL instead, use 'try_element_at'. If necessary set spark.sql.ansi.enabled to false to bypass this error.
+Key 5 does not exist. To return NULL instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select element_at(map(1, 'a', 2, 'b'), 5)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -126,7 +126,7 @@ select element_at(map('a', 1, 'b', 2), 'c')
 struct<>
 -- !query output
 org.apache.spark.SparkNoSuchElementException
-Key 'c' does not exist. To return NULL instead, use 'try_element_at'. If necessary set spark.sql.ansi.enabled to false to bypass this error.
+Key 'c' does not exist. To return NULL instead, use `try_element_at`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select element_at(map('a', 1, 'b', 2), 'c')
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/string-functions.sql.out b/sql/core/src/test/resources/sql-tests/results/ansi/string-functions.sql.out
index 409ef51edd5..ad388e211f5 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/string-functions.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/string-functions.sql.out
@@ -82,7 +82,7 @@ select left("abcd", -2), left("abcd", 0), left("abcd", 'a')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'a' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 42) ==
 ...t("abcd", -2), left("abcd", 0), left("abcd", 'a')
                                    ^^^^^^^^^^^^^^^^^
@@ -110,7 +110,7 @@ select right("abcd", -2), right("abcd", 0), right("abcd", 'a')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'a' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'a' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 44) ==
 ...("abcd", -2), right("abcd", 0), right("abcd", 'a')
                                    ^^^^^^^^^^^^^^^^^^
@@ -419,7 +419,7 @@ SELECT lpad('hi', 'invalid_length')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'invalid_length' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'invalid_length' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT lpad('hi', 'invalid_length')
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -431,7 +431,7 @@ SELECT rpad('hi', 'invalid_length')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'invalid_length' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'invalid_length' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT rpad('hi', 'invalid_length')
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/ansi/timestamp.sql.out b/sql/core/src/test/resources/sql-tests/results/ansi/timestamp.sql.out
index 7a76e368460..368cab2eaea 100644
--- a/sql/core/src/test/resources/sql-tests/results/ansi/timestamp.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/ansi/timestamp.sql.out
@@ -98,7 +98,7 @@ SELECT make_timestamp(2021, 07, 11, 6, 30, 60.007)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The fraction of sec must be zero. Valid range is [0, 60]. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The fraction of sec must be zero. Valid range is [0, 60]. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
diff --git a/sql/core/src/test/resources/sql-tests/results/interval.sql.out b/sql/core/src/test/resources/sql-tests/results/interval.sql.out
index 61b4f20e5fd..01cc7efb492 100644
--- a/sql/core/src/test/resources/sql-tests/results/interval.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/interval.sql.out
@@ -204,7 +204,7 @@ select interval '2 seconds' / 0
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to false (except for ANSI interval type) to bypass this error.
+Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 select interval '2 seconds' / 0
        ^^^^^^^^^^^^^^^^^^^^^^^^
@@ -240,7 +240,7 @@ select interval '2' year / 0
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to false (except for ANSI interval type) to bypass this error.
+Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 select interval '2' year / 0
        ^^^^^^^^^^^^^^^^^^^^^
@@ -1745,7 +1745,7 @@ select -(a) from values (interval '-2147483648 months', interval '2147483647 mon
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 
 
 -- !query
@@ -1754,7 +1754,7 @@ select a - b from values (interval '-2147483648 months', interval '2147483647 mo
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 
 
 -- !query
@@ -1763,7 +1763,7 @@ select b + interval '1 month' from values (interval '-2147483648 months', interv
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 
 
 -- !query
@@ -1992,7 +1992,7 @@ SELECT (INTERVAL '-178956970-8' YEAR TO MONTH) / -1
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT (INTERVAL '-178956970-8' YEAR TO MONTH) / -1
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -2004,7 +2004,7 @@ SELECT (INTERVAL '-178956970-8' YEAR TO MONTH) / -1L
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT (INTERVAL '-178956970-8' YEAR TO MONTH) / -1L
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -2050,7 +2050,7 @@ SELECT (INTERVAL '-106751991 04:00:54.775808' DAY TO SECOND) / -1
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT (INTERVAL '-106751991 04:00:54.775808' DAY TO SECOND) / -1
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -2062,7 +2062,7 @@ SELECT (INTERVAL '-106751991 04:00:54.775808' DAY TO SECOND) / -1L
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+Overflow in integral divide. To return NULL instead, use 'try_divide'. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT (INTERVAL '-106751991 04:00:54.775808' DAY TO SECOND) / -1L
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/boolean.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/boolean.sql.out
index 45bbaa955b2..fe23273c4d9 100644
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/boolean.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/boolean.sql.out
@@ -56,7 +56,7 @@ SELECT boolean('test') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'test' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'test' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('test') AS error
        ^^^^^^^^^^^^^^^
@@ -76,7 +76,7 @@ SELECT boolean('foo') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'foo' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'foo' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('foo') AS error
        ^^^^^^^^^^^^^^
@@ -104,7 +104,7 @@ SELECT boolean('yeah') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'yeah' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'yeah' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('yeah') AS error
        ^^^^^^^^^^^^^^^
@@ -132,7 +132,7 @@ SELECT boolean('nay') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'nay' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'nay' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('nay') AS error
        ^^^^^^^^^^^^^^
@@ -144,7 +144,7 @@ SELECT boolean('on') AS true
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'on' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'on' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('on') AS true
        ^^^^^^^^^^^^^
@@ -156,7 +156,7 @@ SELECT boolean('off') AS `false`
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'off' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'off' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('off') AS `false`
        ^^^^^^^^^^^^^^
@@ -168,7 +168,7 @@ SELECT boolean('of') AS `false`
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'of' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'of' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('of') AS `false`
        ^^^^^^^^^^^^^
@@ -180,7 +180,7 @@ SELECT boolean('o') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'o' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'o' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('o') AS error
        ^^^^^^^^^^^^
@@ -192,7 +192,7 @@ SELECT boolean('on_') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'on_' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'on_' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('on_') AS error
        ^^^^^^^^^^^^^^
@@ -204,7 +204,7 @@ SELECT boolean('off_') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value 'off_' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'off_' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('off_') AS error
        ^^^^^^^^^^^^^^^
@@ -224,7 +224,7 @@ SELECT boolean('11') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value '11' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '11' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('11') AS error
        ^^^^^^^^^^^^^
@@ -244,7 +244,7 @@ SELECT boolean('000') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value '000' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '000' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('000') AS error
        ^^^^^^^^^^^^^^
@@ -256,7 +256,7 @@ SELECT boolean('') AS error
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value '' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean('') AS error
        ^^^^^^^^^^^
@@ -365,7 +365,7 @@ SELECT boolean(string('  tru e ')) AS invalid
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value '  tru e ' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '  tru e ' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean(string('  tru e ')) AS invalid
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -377,7 +377,7 @@ SELECT boolean(string('')) AS invalid
 struct<>
 -- !query output
 org.apache.spark.SparkRuntimeException
-The value '' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT boolean(string('')) AS invalid
        ^^^^^^^^^^^^^^^^^^^
@@ -524,7 +524,7 @@ INSERT INTO BOOLTBL2
 struct<>
 -- !query output
 org.apache.spark.sql.AnalysisException
-failed to evaluate expression CAST('XXX' AS BOOLEAN): The value 'XXX' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+failed to evaluate expression CAST('XXX' AS BOOLEAN): The value 'XXX' of the type "STRING" cannot be cast to "BOOLEAN" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 2, position 11) ==
    VALUES (boolean('XXX'))
            ^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/float4.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/float4.sql.out
index fa8736d089b..a1399062419 100644
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/float4.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/float4.sql.out
@@ -96,7 +96,7 @@ SELECT float('N A N')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'N A N' of the type "STRING" cannot be cast to "FLOAT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'N A N' of the type "STRING" cannot be cast to "FLOAT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT float('N A N')
        ^^^^^^^^^^^^^^
@@ -108,7 +108,7 @@ SELECT float('NaN x')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'NaN x' of the type "STRING" cannot be cast to "FLOAT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'NaN x' of the type "STRING" cannot be cast to "FLOAT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT float('NaN x')
        ^^^^^^^^^^^^^^
@@ -120,7 +120,7 @@ SELECT float(' INFINITY    x')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value ' INFINITY    x' of the type "STRING" cannot be cast to "FLOAT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value ' INFINITY    x' of the type "STRING" cannot be cast to "FLOAT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT float(' INFINITY    x')
        ^^^^^^^^^^^^^^^^^^^^^^^
@@ -156,7 +156,7 @@ SELECT float(decimal('nan'))
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'nan' of the type "STRING" cannot be cast to "DECIMAL(10,0)" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'nan' of the type "STRING" cannot be cast to "DECIMAL(10,0)" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 13) ==
 SELECT float(decimal('nan'))
              ^^^^^^^^^^^^^^
@@ -340,7 +340,7 @@ SELECT int(float('2147483647'))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-The value 2.14748365E9 of the type "FLOAT" cannot be cast to "INT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 2.14748365E9 of the type "FLOAT" cannot be cast to "INT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -357,7 +357,7 @@ SELECT int(float('-2147483900'))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-The value -2.1474839E9 of the type "FLOAT" cannot be cast to "INT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value -2.1474839E9 of the type "FLOAT" cannot be cast to "INT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -390,7 +390,7 @@ SELECT bigint(float('-9223380000000000000'))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-The value -9.22338E18 of the type "FLOAT" cannot be cast to "BIGINT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value -9.22338E18 of the type "FLOAT" cannot be cast to "BIGINT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/float8.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/float8.sql.out
index f90d7c663e0..270332cd196 100644
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/float8.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/float8.sql.out
@@ -128,7 +128,7 @@ SELECT double('N A N')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'N A N' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'N A N' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT double('N A N')
        ^^^^^^^^^^^^^^^
@@ -140,7 +140,7 @@ SELECT double('NaN x')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'NaN x' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'NaN x' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT double('NaN x')
        ^^^^^^^^^^^^^^^
@@ -152,7 +152,7 @@ SELECT double(' INFINITY    x')
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value ' INFINITY    x' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value ' INFINITY    x' of the type "STRING" cannot be cast to "DOUBLE" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT double(' INFINITY    x')
        ^^^^^^^^^^^^^^^^^^^^^^^^
@@ -188,7 +188,7 @@ SELECT double(decimal('nan'))
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'nan' of the type "STRING" cannot be cast to "DECIMAL(10,0)" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'nan' of the type "STRING" cannot be cast to "DECIMAL(10,0)" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 14) ==
 SELECT double(decimal('nan'))
               ^^^^^^^^^^^^^^
@@ -845,7 +845,7 @@ SELECT bigint(double('-9223372036854780000'))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-The value -9.22337203685478E18D of the type "DOUBLE" cannot be cast to "BIGINT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value -9.22337203685478E18D of the type "DOUBLE" cannot be cast to "BIGINT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/int4.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/int4.sql.out
index 144a01511f2..265aaa7ce2a 100755
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/int4.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/int4.sql.out
@@ -200,7 +200,7 @@ SELECT '' AS five, i.f1, i.f1 * smallint('2') AS x FROM INT4_TBL i
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 25) ==
 SELECT '' AS five, i.f1, i.f1 * smallint('2') AS x FROM INT4_TBL i
                          ^^^^^^^^^^^^^^^^^^^^
@@ -223,7 +223,7 @@ SELECT '' AS five, i.f1, i.f1 * int('2') AS x FROM INT4_TBL i
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 25) ==
 SELECT '' AS five, i.f1, i.f1 * int('2') AS x FROM INT4_TBL i
                          ^^^^^^^^^^^^^^^
@@ -246,7 +246,7 @@ SELECT '' AS five, i.f1, i.f1 + smallint('2') AS x FROM INT4_TBL i
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 25) ==
 SELECT '' AS five, i.f1, i.f1 + smallint('2') AS x FROM INT4_TBL i
                          ^^^^^^^^^^^^^^^^^^^^
@@ -270,7 +270,7 @@ SELECT '' AS five, i.f1, i.f1 + int('2') AS x FROM INT4_TBL i
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 25) ==
 SELECT '' AS five, i.f1, i.f1 + int('2') AS x FROM INT4_TBL i
                          ^^^^^^^^^^^^^^^
@@ -294,7 +294,7 @@ SELECT '' AS five, i.f1, i.f1 - smallint('2') AS x FROM INT4_TBL i
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 25) ==
 SELECT '' AS five, i.f1, i.f1 - smallint('2') AS x FROM INT4_TBL i
                          ^^^^^^^^^^^^^^^^^^^^
@@ -318,7 +318,7 @@ SELECT '' AS five, i.f1, i.f1 - int('2') AS x FROM INT4_TBL i
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-integer overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+integer overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 25) ==
 SELECT '' AS five, i.f1, i.f1 - int('2') AS x FROM INT4_TBL i
                          ^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/int8.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/int8.sql.out
index bbdee119a51..7761127d7b5 100755
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/int8.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/int8.sql.out
@@ -392,7 +392,7 @@ SELECT '' AS three, q1, q2, q1 * q2 AS multiply FROM INT8_TBL
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-long overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+long overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 28) ==
 SELECT '' AS three, q1, q2, q1 * q2 AS multiply FROM INT8_TBL
                             ^^^^^^^
@@ -575,7 +575,7 @@ select bigint('9223372036854775800') / bigint('0')
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to false (except for ANSI interval type) to bypass this error.
+Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 select bigint('9223372036854775800') / bigint('0')
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -587,7 +587,7 @@ select bigint('-9223372036854775808') / smallint('0')
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to false (except for ANSI interval type) to bypass this error.
+Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 select bigint('-9223372036854775808') / smallint('0')
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -599,7 +599,7 @@ select smallint('100') / bigint('0')
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to false (except for ANSI interval type) to bypass this error.
+Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 select smallint('100') / bigint('0')
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -619,7 +619,7 @@ SELECT CAST(q1 AS int) FROM int8_tbl WHERE q2 <> 456
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-The value 4567890123456789L of the type "BIGINT" cannot be cast to "INT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 4567890123456789L of the type "BIGINT" cannot be cast to "INT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -636,7 +636,7 @@ SELECT CAST(q1 AS smallint) FROM int8_tbl WHERE q2 <> 456
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-The value 4567890123456789L of the type "BIGINT" cannot be cast to "SMALLINT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 4567890123456789L of the type "BIGINT" cannot be cast to "SMALLINT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -673,7 +673,7 @@ SELECT CAST(double('922337203685477580700.0') AS bigint)
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-The value 9.223372036854776E20D of the type "DOUBLE" cannot be cast to "BIGINT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 9.223372036854776E20D of the type "DOUBLE" cannot be cast to "BIGINT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -745,7 +745,7 @@ SELECT string(int(shiftleft(bigint(-1), 63))+1)
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-The value -9223372036854775808L of the type "BIGINT" cannot be cast to "INT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value -9223372036854775808L of the type "BIGINT" cannot be cast to "INT" due to an overflow. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -754,7 +754,7 @@ SELECT bigint((-9223372036854775808)) * bigint((-1))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-long overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+long overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT bigint((-9223372036854775808)) * bigint((-1))
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -782,7 +782,7 @@ SELECT bigint((-9223372036854775808)) * int((-1))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-long overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+long overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT bigint((-9223372036854775808)) * int((-1))
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -810,7 +810,7 @@ SELECT bigint((-9223372036854775808)) * smallint((-1))
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-long overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+long overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 7) ==
 SELECT bigint((-9223372036854775808)) * smallint((-1))
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/select_having.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/select_having.sql.out
index d91adc7ed24..7a9f4ae055c 100644
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/select_having.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/select_having.sql.out
@@ -177,7 +177,7 @@ SELECT 1 AS one FROM test_having WHERE 1/a = 1 HAVING 1 < 2
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to false (except for ANSI interval type) to bypass this error.
+Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 39) ==
 ...1 AS one FROM test_having WHERE 1/a = 1 HAVING 1 < 2
                                    ^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/text.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/text.sql.out
index 39e2603fd26..ed218c1a52c 100755
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/text.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/text.sql.out
@@ -65,7 +65,7 @@ select string('four: ') || 2+2
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'four: 2' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'four: 2' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select string('four: ') || 2+2
        ^^^^^^^^^^^^^^^^^^^^^^^
@@ -77,7 +77,7 @@ select 'four: ' || 2+2
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'four: 2' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'four: 2' of the type "STRING" cannot be cast to "BIGINT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 1, position 7) ==
 select 'four: ' || 2+2
        ^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part2.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part2.sql.out
index 2c335898d65..58633790cf7 100644
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part2.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part2.sql.out
@@ -225,7 +225,7 @@ from range(9223372036854775804, 9223372036854775807) x
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-long overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+long overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 
 
 -- !query
@@ -235,7 +235,7 @@ from range(-9223372036854775806, -9223372036854775805) x
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-long overflow. If necessary set spark.sql.ansi.enabled to false (except for ANSI interval type) to bypass this error.
+long overflow. If necessary set spark.sql.ansi.enabled to "false" (except for ANSI interval type) to bypass this error.
 
 
 -- !query
@@ -462,7 +462,7 @@ window w as (order by f_numeric range between
 struct<>
 -- !query output
 org.apache.spark.SparkNumberFormatException
-The value 'NaN' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value 'NaN' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 3, position 12) ==
 window w as (order by f_numeric range between
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part3.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part3.sql.out
index 418c9973611..68f9d532a1c 100644
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part3.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part3.sql.out
@@ -72,7 +72,7 @@ insert into datetimes values
 struct<>
 -- !query output
 org.apache.spark.sql.AnalysisException
-failed to evaluate expression CAST('11:00 BST' AS TIMESTAMP): The value '11:00 BST' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+failed to evaluate expression CAST('11:00 BST' AS TIMESTAMP): The value '11:00 BST' of the type "STRING" cannot be cast to "TIMESTAMP" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 2, position 23) ==
 (1, timestamp '11:00', cast ('11:00 BST' as timestamp), cast ('1 year' as timestamp), ...
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part4.sql.out b/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part4.sql.out
index 3e30ad26941..f3f4a448df6 100644
--- a/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part4.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/postgreSQL/window_part4.sql.out
@@ -501,7 +501,7 @@ FROM (VALUES(1,1),(2,2),(3,(cast('nan' as int))),(4,3),(5,4)) t(a,b)
 struct<>
 -- !query output
 org.apache.spark.sql.AnalysisException
-failed to evaluate expression CAST('nan' AS INT): The value 'nan' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+failed to evaluate expression CAST('nan' AS INT): The value 'nan' of the type "STRING" cannot be cast to "INT" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 == SQL(line 3, position 28) ==
 FROM (VALUES(1,1),(2,2),(3,(cast('nan' as int))),(4,3),(5,4)) t(a,b)
                             ^^^^^^^^^^^^^^^^^^
diff --git a/sql/core/src/test/resources/sql-tests/results/timestampNTZ/timestamp-ansi.sql.out b/sql/core/src/test/resources/sql-tests/results/timestampNTZ/timestamp-ansi.sql.out
index e6f62b679c6..e374f92c74e 100644
--- a/sql/core/src/test/resources/sql-tests/results/timestampNTZ/timestamp-ansi.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/timestampNTZ/timestamp-ansi.sql.out
@@ -98,7 +98,7 @@ SELECT make_timestamp(2021, 07, 11, 6, 30, 60.007)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The fraction of sec must be zero. Valid range is [0, 60]. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The fraction of sec must be zero. Valid range is [0, 60]. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
@@ -332,7 +332,7 @@ select to_timestamp(1)
 struct<>
 -- !query output
 org.apache.spark.SparkDateTimeException
-The value '1' of the type "STRING" cannot be cast to "TIMESTAMP_NTZ" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to false to bypass this error.
+The value '1' of the type "STRING" cannot be cast to "TIMESTAMP_NTZ" because it is malformed. To return NULL instead, use `try_cast`. If necessary set "spark.sql.ansi.enabled" to "false" to bypass this error.
 
 
 -- !query
diff --git a/sql/core/src/test/resources/sql-tests/results/udf/postgreSQL/udf-select_having.sql.out b/sql/core/src/test/resources/sql-tests/results/udf/postgreSQL/udf-select_having.sql.out
index dfb287ff023..f16a680cfbc 100644
--- a/sql/core/src/test/resources/sql-tests/results/udf/postgreSQL/udf-select_having.sql.out
+++ b/sql/core/src/test/resources/sql-tests/results/udf/postgreSQL/udf-select_having.sql.out
@@ -177,7 +177,7 @@ SELECT 1 AS one FROM test_having WHERE 1/udf(a) = 1 HAVING 1 < 2
 struct<>
 -- !query output
 org.apache.spark.SparkArithmeticException
-Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to false (except for ANSI interval type) to bypass this error.
+Division by zero. To return NULL instead, use `try_divide`. If necessary set "spark.sql.ansi.enabled" to "false" (except for ANSI interval type) to bypass this error.
 == SQL(line 1, position 39) ==
 ...1 AS one FROM test_having WHERE 1/udf(a) = 1 HAVING 1 < 2
                                    ^^^^^^^^
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
index 96a29f6dab6..e6ce1d70080 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala
@@ -181,7 +181,7 @@ class QueryExecutionErrorsSuite extends QueryTest
 
       val format = "Parquet"
       val config = "\"" + SQLConf.PARQUET_REBASE_MODE_IN_READ.key + "\""
-      val option = "datetimeRebaseMode"
+      val option = "\"" + "datetimeRebaseMode" + "\""
       assert(e.getErrorClass === "INCONSISTENT_BEHAVIOR_CROSS_VERSION")
       assert(e.getMessage ===
        "You may get a different result due to the upgrading to Spark >= 3.0: " +
@@ -191,10 +191,10 @@ class QueryExecutionErrorsSuite extends QueryTest
          |Spark 2.x or legacy versions of Hive, which uses a legacy hybrid calendar
           |that is different from Spark 3.0+'s Proleptic Gregorian calendar.
          |See more details in SPARK-31404. You can set the SQL config $config or
-          |the datasource option '$option' to 'LEGACY' to rebase the datetime values
+          |the datasource option $option to "LEGACY" to rebase the datetime values
          |w.r.t. the calendar difference during reading. To read the datetime values
-          |as it is, set the SQL config $config or the datasource option '$option'
-          |to 'CORRECTED'.
+          |as it is, set the SQL config $config or the datasource option $option
+          |to "CORRECTED".
           |""".stripMargin)
     }
 
@@ -216,9 +216,9 @@ class QueryExecutionErrorsSuite extends QueryTest
            |into $format files can be dangerous, as the files may be read by Spark 2.x
            |or legacy versions of Hive later, which uses a legacy hybrid calendar that
            |is different from Spark 3.0+'s Proleptic Gregorian calendar. See more
-            |details in SPARK-31404. You can set $config to 'LEGACY' to rebase the
+            |details in SPARK-31404. You can set $config to "LEGACY" to rebase the
            |datetime values w.r.t. the calendar difference during writing, to get maximum
-            |interoperability. Or set $config to 'CORRECTED' to write the datetime
+            |interoperability. Or set $config to "CORRECTED" to write the datetime
            |values as it is, if you are 100% sure that the written files will only be read by
             |Spark 3.0+ or other systems that use Proleptic Gregorian calendar.
             |""".stripMargin)
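
The golden-file churn above all follows one convention: SQL config keys, datasource option names, and now their values are wrapped in double quotes in error messages. As a minimal illustration of that convention only — the real helpers are Scala methods on the `QueryErrorsBase` trait (the commit adds `toDSOption()`), and this Python sketch is purely hypothetical — the quoting can be modeled as:

```python
# Hypothetical Python mirror of the quoting convention used by Spark's
# QueryErrorsBase helpers; not the actual Scala implementation.

def to_sql_conf(conf: str) -> str:
    """Quote a SQL config key, e.g. spark.sql.ansi.enabled."""
    return f'"{conf}"'

def to_ds_option(option: str) -> str:
    """Quote a datasource option name, e.g. datetimeRebaseMode."""
    return f'"{option}"'

def to_conf_val(value: str) -> str:
    """Quote a config/option value -- the new behavior in this backport."""
    return f'"{value}"'

# Assembling a message in the post-patch style seen in the diffs above:
msg = ("Division by zero. To return NULL instead, use `try_divide`. "
      f"If necessary set {to_sql_conf('spark.sql.ansi.enabled')} to "
      f"{to_conf_val('false')} to bypass this error.")
```

With consistent quoting, a value like `false` can be extracted from the message text unambiguously, which is one of the stated motivations for the change.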


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org
