Re: [DISCUSS] upper/lower of special characters

2018-09-21 Thread seancxmao






Hi, Sean

After a brief investigation, I found there are already some tickets/PRs about this issue; I just didn't know about them.

https://issues.apache.org/jira/browse/SPARK-20156
https://github.com/apache/spark/pull/17527
https://github.com/apache/spark/pull/17655
 
I have carefully read the discussions and learned a lot. I totally agree with this idea:

> I could see backing out changes that affect user application strings, to be conservative. We could decide to change that later. The issue here really stems from lowercasing of purely internal strings.

As for the lower/upper functions, Spark has the same behavior as Hive:

select upper("i"), lower("İ"), upper("ı"), lower("I");
--
Spark      I, i with dot, I, i
Hive       I, i with dot, I, i

Thanks,
Chenxiao Mao
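A minimal sketch to reproduce the comparison above from spark-shell (this assumes an existing SparkSession named spark; the column aliases are purely illustrative):

// İ is U+0130 (dotted capital I), ı is U+0131 (dotless small i)
val df = spark.sql(
  """SELECT upper('i') AS up_i,
    |       lower('İ') AS low_dotted_cap_i,
    |       upper('ı') AS up_dotless_i,
    |       lower('I') AS low_cap_i""".stripMargin)
df.show()
// observed on Spark (same as Hive): I, i with dot, I, i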
On 09/19/2018 18:35, Sean Owen wrote:


I don't have the details in front of me, but I recall we explicitly overhauled locale-sensitive toUpper and toLower in the code for this exact situation. The current behavior should be on purpose. I believe user data strings are handled in a case-sensitive way, but things like reserved words in SQL are not, of course. The Spark behavior is most correct and consistent with Hive, right?

On Wed, Sep 19, 2018, 1:14 AM seancxmao <seancx...@gmail.com> wrote:







Hi, all

We found that there are some differences in case handling of special characters between Spark and other database systems. See the list below for an example (you may also check the attached pictures):

select upper("i"), lower("İ"), upper("ı"), lower("I");
--
Spark      I, i with dot, I, i
Hive       I, i with dot, I, i
Teradata   I, i,          I, i
Oracle     I, i,          I, i
SQLServer  I, i,          I, i
MySQL      I, i,          I, i

"İ" and "ı" are Turkish characters. If locale-sensitive case handling were used, the expected results of the above upper/lower functions would be:

select upper("i"), lower("İ"), upper("ı"), lower("I");
--
İ, i, I, ı

But it seems that these systems all do locale-insensitive mapping. Presto explicitly describes this as a known issue in its docs (https://prestodb.io/docs/current/functions/string.html):

> The lower() and upper() functions do not perform locale-sensitive, context-sensitive, or one-to-many mappings required for some languages. Specifically, this will return incorrect results for Lithuanian, Turkish and Azeri.

Java-based systems have the same behavior since they all depend on the same JDK String methods. Teradata/Oracle/SQLServer/MySQL also behave the same as one another. However, Java-based systems return a different result for lower("İ"): Java-based systems (Spark/Hive) return "i with dot" while the other database systems (Teradata/Oracle/SQLServer/MySQL) return "i".

My questions:
(1) Should we let Spark return "i" for lower("İ"), the same as other database systems?
(2) Should Spark support locale-sensitive upper/lower functions? Because rows of a table may need different locales, we cannot even set the locale at table level. What we might do is provide upper(string, locale)/lower(string, locale) and let users decide what locale they want to use.

Some references below. Just FYI.
* https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#toLowerCase-java.util.Locale-
* https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#toUpperCase-java.util.Locale-
* http://grepalex.com/2013/02/14/java-7-and-the-dotted--and-dotless-i/
* https://stackoverflow.com/questions/3322152/is-there-a-way-to-get-rid-of-accents-and-convert-a-whole-string-to-regular-lette

Your comments and advice are highly appreciated.

Many thanks!
Chenxiao Mao (Sean)
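For reference, a small sketch (plain JDK calls, not Spark code) of the locale-sensitive vs. locale-insensitive mappings that the expected results above assume, using java.lang.String.toLowerCase/toUpperCase with an explicit java.util.Locale:

import java.util.Locale

val tr = Locale.forLanguageTag("tr")

"I".toLowerCase(Locale.ROOT)  // "i"   locale-insensitive mapping
"I".toLowerCase(tr)           // "ı"   U+0131, dotless small i
"i".toUpperCase(Locale.ROOT)  // "I"
"i".toUpperCase(tr)           // "İ"   U+0130, dotted capital I
"İ".toLowerCase(Locale.ROOT)  // "i" followed by U+0307 (combining dot above), i.e. "i with dot"
"İ".toLowerCase(tr)           // "i"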









Re: [DISCUSS] upper/lower of special characters

2018-09-21 Thread seancxmao






Hi, Reynold

Sorry for the slow response. Thanks for your suggestion. I'd like to document this in the API docs (SQL built-in functions). BTW, this is a real case we met in production: the Turkish data comes from other systems through ETL. As you mentioned, we use UDFs to avoid these issues. E.g., for the special Turkish character "İ" (U+0130), we first process it with regexp_replace(c, 'İ', 'I') before further processing.

Thanks,
Chenxiao Mao (Sean)
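A sketch of that workaround with the DataFrame API (the table name t and column name c are placeholders; lower_locale below is a hypothetical UDF name, not a built-in function):

import java.util.Locale
import org.apache.spark.sql.functions.{col, lower, regexp_replace}

// normalize the dotted capital İ (U+0130) to a plain I before lower-casing,
// so the result matches Teradata/Oracle/SQLServer/MySQL
val normalized = spark.table("t")
  .withColumn("c_lower", lower(regexp_replace(col("c"), "İ", "I")))

// alternatively, a locale-aware lower() as a UDF; callers pass a BCP 47 tag such as "tr"
spark.udf.register("lower_locale", (s: String, localeTag: String) =>
  if (s == null) null else s.toLowerCase(Locale.forLanguageTag(localeTag)))
// usage: spark.sql("SELECT lower_locale(c, 'tr') FROM t")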

On 09/19/2018 14:18, Reynold Xin wrote:


I'd just document it as a known limitation and move on for now, until there are enough end users that need this. Spark is also very powerful with UDFs and end users can easily work around this using UDFs.

--
excuse the brevity and lower case due to wrist injury

On Tue, Sep 18, 2018 at 11:14 PM seancxmao <seancx...@gmail.com> wrote:







Hi, all

We found that there are some differences in case handling of special characters between Spark and other database systems. See the list below for an example (you may also check the attached pictures):

select upper("i"), lower("İ"), upper("ı"), lower("I");
--
Spark      I, i with dot, I, i
Hive       I, i with dot, I, i
Teradata   I, i,          I, i
Oracle     I, i,          I, i
SQLServer  I, i,          I, i
MySQL      I, i,          I, i

"İ" and "ı" are Turkish characters. If locale-sensitive case handling were used, the expected results of the above upper/lower functions would be:

select upper("i"), lower("İ"), upper("ı"), lower("I");
--
İ, i, I, ı

But it seems that these systems all do locale-insensitive mapping. Presto explicitly describes this as a known issue in its docs (https://prestodb.io/docs/current/functions/string.html):

> The lower() and upper() functions do not perform locale-sensitive, context-sensitive, or one-to-many mappings required for some languages. Specifically, this will return incorrect results for Lithuanian, Turkish and Azeri.

Java-based systems have the same behavior since they all depend on the same JDK String methods. Teradata/Oracle/SQLServer/MySQL also behave the same as one another. However, Java-based systems return a different result for lower("İ"): Java-based systems (Spark/Hive) return "i with dot" while the other database systems (Teradata/Oracle/SQLServer/MySQL) return "i".

My questions:
(1) Should we let Spark return "i" for lower("İ"), the same as other database systems?
(2) Should Spark support locale-sensitive upper/lower functions? Because rows of a table may need different locales, we cannot even set the locale at table level. What we might do is provide upper(string, locale)/lower(string, locale) and let users decide what locale they want to use.

Some references below. Just FYI.
* https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#toLowerCase-java.util.Locale-
* https://docs.oracle.com/javase/8/docs/api/java/lang/String.html#toUpperCase-java.util.Locale-
* http://grepalex.com/2013/02/14/java-7-and-the-dotted--and-dotless-i/
* https://stackoverflow.com/questions/3322152/is-there-a-way-to-get-rid-of-accents-and-convert-a-whole-string-to-regular-lette

Your comments and advice are highly appreciated.

Many thanks!
Chenxiao Mao (Sean)



