Github user jodersky commented on the issue:

    https://github.com/apache/spark/pull/15398
  
    I understand your comment about the weird escaping behaviour, @mengxr. Putting myself in the shoes of a new user, I would be least surprised if Spark either treated the string verbatim (as in "anything after a backslash is treated literally") or gave me an error telling me that my escape sequence doesn't make sense. Changing how the escape character is parsed depending on what follows it also makes the code more complicated.
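
    To make the first interpretation concrete, here is a minimal sketch of a "backslash always escapes the next character" translation to a Java regex. The name `simpleLikeToRegex` and the regex-based approach are just assumptions for illustration, not the code in this PR:

```scala
import java.util.regex.Pattern

// Hypothetical sketch: '\' always makes the next character literal,
// '%' matches any sequence of characters and '_' matches exactly one.
def simpleLikeToRegex(pattern: String): String = {
  val sb = new StringBuilder
  var i = 0
  while (i < pattern.length) {
    pattern.charAt(i) match {
      case '\\' if i + 1 < pattern.length =>
        // whatever follows the escape is matched verbatim
        sb.append(Pattern.quote(pattern.charAt(i + 1).toString))
        i += 1
      case '%' => sb.append(".*")
      case '_' => sb.append(".")
      case c   => sb.append(Pattern.quote(c.toString))
    }
    i += 1
  }
  sb.toString
}
```

    A trailing backslash could either be matched literally (as above) or rejected with an error, which would be the other behaviour I'd find unsurprising.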
    
    However, I also understand that making migration from Hive to Spark SQL as straightforward as possible is a high priority. I therefore concluded that existing queries should behave the same and reimplemented Hive's `LIKE` pattern matching. Maybe this wasn't the right trade-off for non-Hive users?
    
    I was unable to find any docs detailing the escaping behaviour in the scenarios mentioned above; however, I basically followed the implementation thanks to the [link Simon provided](https://github.com/apache/hive/blob/ff67cdda1c538dc65087878eeba3e165cf3230f4/ql/src/java/org/apache/hadoop/hive/ql/udf/UDFLike.java#L64).
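
    To illustrate the behaviour I reimplemented, here is a rough sketch in the spirit of the linked `UDFLike` code (a simplified illustration with a made-up name, not the actual Hive code or this patch; details such as whether `\\` itself is escapable may differ): the backslash only acts as an escape when the next character is `%` or `_`, and is otherwise matched literally.

```scala
import java.util.regex.Pattern

// Rough sketch of the Hive-style translation: '\' escapes only the
// wildcards '%' and '_'; a backslash before anything else is literal.
def hiveLikeToRegex(pattern: String): String = {
  val sb = new StringBuilder
  var i = 0
  while (i < pattern.length) {
    val c = pattern.charAt(i)
    if (c == '\\' && i + 1 < pattern.length &&
        (pattern.charAt(i + 1) == '%' || pattern.charAt(i + 1) == '_')) {
      // escaped wildcard: match it literally instead of expanding it
      sb.append(pattern.charAt(i + 1))
      i += 1
    } else if (c == '%') {
      sb.append(".*")
    } else if (c == '_') {
      sb.append(".")
    } else {
      sb.append(Pattern.quote(c.toString))
    }
    i += 1
  }
  sb.toString
}
```

    Under this sketch the pattern `\a` matches the two characters `\a`, whereas under the first sketch it would match just `a`, which is the kind of divergence discussed above.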
    
    @rxin, do you think this case warrants diverging from Hive?

