This is an automated email from the ASF dual-hosted git repository.

maxgekk pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new ee21b12c395 [MINOR][SQL] Remove duplicate cases of escaping characters in string literals
ee21b12c395 is described below

commit ee21b12c395ac184c8ddc2f74b66f6e6285de5fa
Author: Max Gekk <max.g...@gmail.com>
AuthorDate: Thu Sep 28 21:18:40 2023 +0300

    [MINOR][SQL] Remove duplicate cases of escaping characters in string literals
    
    ### What changes were proposed in this pull request?
    In the PR, I propose to remove some cases in `appendEscapedChar()` because they are already handled identically by the default case.
    
    The following tests check the cases:
    - https://github.com/apache/spark/blob/187e9a851758c0e9cec11edab2bc07d6f4404001/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ParserUtilsSuite.scala#L97-L98
    - https://github.com/apache/spark/blob/187e9a851758c0e9cec11edab2bc07d6f4404001/sql/catalyst/src/test/scala/org/apache/spark/sql/catalyst/parser/ParserUtilsSuite.scala#L104
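
    For illustration, here is a minimal Scala sketch of the match after the cleanup. It is not the exact Spark source: the `StringBuilder` is passed explicitly to keep the example self-contained, and it assumes the default branch appends the character unchanged, which is what makes the removed cases redundant.
    ```scala
    def appendEscapedChar(sb: StringBuilder, n: Char): Unit = n match {
      case '0' => sb.append('\u0000')  // NUL
      case 'b' => sb.append('\b')
      case 'n' => sb.append('\n')
      case 'r' => sb.append('\r')
      case 't' => sb.append('\t')
      case 'Z' => sb.append('\u001A')  // SUB (Ctrl+Z)
      case '%' => sb.append("\\%")     // kept escaped, matching MySQL
      case '_' => sb.append("\\_")     // kept escaped, matching MySQL
      // Default: append the character unchanged. This already covers
      // the quote, double-quote and backslash cases removed by this PR.
      case _ => sb.append(n)
    }
    ```
    Under this sketch, a removed branch such as `case '\'' => sb.append('\'')` and the default branch produce identical output for the same input.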
    
    ### Why are the changes needed?
    To improve code maintainability.
    
    ### Does this PR introduce _any_ user-facing change?
    No.
    
    ### How was this patch tested?
    By running the affected test suite:
    ```
    $ build/sbt "test:testOnly *.ParserUtilsSuite"
    ```
    
    ### Was this patch authored or co-authored using generative AI tooling?
    No.
    
    Closes #43170 from MaxGekk/cleanup-escaping.
    
    Authored-by: Max Gekk <max.g...@gmail.com>
    Signed-off-by: Max Gekk <max.g...@gmail.com>
---
 .../scala/org/apache/spark/sql/catalyst/util/SparkParserUtils.scala    | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/SparkParserUtils.scala b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/SparkParserUtils.scala
index c318f208255..a4ce5fb1203 100644
--- a/sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/SparkParserUtils.scala
+++ b/sql/api/src/main/scala/org/apache/spark/sql/catalyst/util/SparkParserUtils.scala
@@ -38,14 +38,11 @@ trait SparkParserUtils {
     def appendEscapedChar(n: Char): Unit = {
       n match {
         case '0' => sb.append('\u0000')
-        case '\'' => sb.append('\'')
-        case '"' => sb.append('\"')
         case 'b' => sb.append('\b')
         case 'n' => sb.append('\n')
         case 'r' => sb.append('\r')
         case 't' => sb.append('\t')
         case 'Z' => sb.append('\u001A')
-        case '\\' => sb.append('\\')
        // The following 2 lines are exactly what MySQL does TODO: why do we do this?
         case '%' => sb.append("\\%")
         case '_' => sb.append("\\_")


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@spark.apache.org
For additional commands, e-mail: commits-h...@spark.apache.org