Egor Pahomov created SPARK-16548:
------------------------------------

             Summary: java.io.CharConversionException: Invalid UTF-32 character prevents me from querying my data
                 Key: SPARK-16548
                 URL: https://issues.apache.org/jira/browse/SPARK-16548
             Project: Spark
          Issue Type: Bug
          Components: SQL
    Affects Versions: 1.6.1
            Reporter: Egor Pahomov
            Priority: Minor


Basically, when I query my JSON data I get:
{code}
java.io.CharConversionException: Invalid UTF-32 character 0x7b2265(above 10ffff)  at char #192, byte #771)
        at com.fasterxml.jackson.core.io.UTF32Reader.reportInvalid(UTF32Reader.java:189)
        at com.fasterxml.jackson.core.io.UTF32Reader.read(UTF32Reader.java:150)
        at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.loadMore(ReaderBasedJsonParser.java:153)
        at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._skipWSOrEnd(ReaderBasedJsonParser.java:1855)
        at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:571)
        at org.apache.spark.sql.catalyst.expressions.GetJsonObject$$anonfun$eval$2$$anonfun$4.apply(jsonExpressions.scala:142)
{code}

This behaviour is undesirable: if one JSON record among 100500 cannot be parsed, return null for that record rather than failing the whole query. I have a dirty one-line fix, and I understand how to make it more reasonable. What is our position - what behaviour do we want?
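A minimal sketch of the per-record fallback described above, assuming the exception is caught at the point of extraction. The names here (`SafeJsonSketch`, `safeExtract`, the simulated `parse`) are hypothetical stand-ins, not Spark's actual code; the real path goes through Jackson's parser inside `GetJsonObject.eval`.

```scala
import java.io.CharConversionException

object SafeJsonSketch {
  // Hypothetical stand-in for Jackson's parser: a record containing a
  // stray NUL byte blows up with CharConversionException, mimicking
  // the stack trace in the report. Real code would call Jackson.
  private def parse(record: String): String =
    if (record.contains('\u0000'))
      throw new CharConversionException("Invalid UTF-32 character")
    else record

  // Proposed behaviour: a malformed record yields None (null in SQL
  // terms) instead of propagating the exception and failing the query.
  def safeExtract(record: String): Option[String] =
    try Some(parse(record))
    catch { case _: CharConversionException => None }
}
```

With this shape, one bad record among many simply produces a null column value while the rest of the query proceeds.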



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)