Github user fuqiliang commented on the issue:
https://github.com/apache/spark/pull/20666
Hi, thanks for the help.
Do we have any follow-up stories on the data loss in Spark 2.2?
I have tried to use
`sql.read.format("my.spark.sql.execution.datasources.json"
Github user fuqiliang commented on the issue:
https://github.com/apache/spark/pull/20666
To be specific, the JSON file (Sanity4.json) is
`{"a":"a1","int":1,"other":4.4}
{"a":"a2","int":"","oth
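To make the failure mode concrete, here is a minimal stdlib-only sketch (not Spark itself, just a simulation of what a PERMISSIVE-mode parser does in Spark 2.x): when any field of a record fails to cast, the whole row is nulled and the raw text goes into `_corrupt_record`. The second input line below is a hypothetical completion of the truncated sample record.

```python
import json

def parse_permissive(line):
    """Simulate Spark 2.x PERMISSIVE mode for a schema (a: string,
    int: int, other: double): any malformed field nulls the ENTIRE row
    and stashes the raw text in _corrupt_record."""
    try:
        rec = json.loads(line)
        return {
            "a": rec.get("a"),
            "int": int(rec["int"]) if rec.get("int") is not None else None,
            "other": float(rec["other"]) if rec.get("other") is not None else None,
            "_corrupt_record": None,
        }
    except (ValueError, TypeError, KeyError):
        # Whole row is dropped to nulls; only the raw line survives.
        return {"a": None, "int": None, "other": None, "_corrupt_record": line}

rows = [parse_permissive(l) for l in [
    '{"a":"a1","int":1,"other":4.4}',   # first record from the sample file
    '{"a":"a2","int":"","other":2.2}',  # hypothetical second record
]]
```

The second record fails only on `int("")`, yet its valid `a` and `other` values are discarded along with it; that is the "no partial results" behavior being discussed.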
Github user fuqiliang commented on the issue:
https://github.com/apache/spark/pull/20666
Hi guys, I am a Spark user.
I have a question about this "JSON doesn't support partial results for
corrupted records" behavior.
In Spark 1.6, partial results were given
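The difference between the two versions can be sketched with a small stdlib-only simulation (an illustration of the described behaviors, not Spark code; the sample record and the three-field schema are assumptions):

```python
import json

def parse(line, partial):
    """partial=True mimics Spark 1.6 (keep the fields that did parse,
    null only the bad ones); partial=False mimics Spark 2.x PERMISSIVE
    (a single bad field nulls the whole row)."""
    rec = json.loads(line)
    row, failed = {}, False
    for field, cast in (("a", str), ("int", int), ("other", float)):
        try:
            row[field] = cast(rec[field])
        except (KeyError, ValueError, TypeError):
            row[field] = None
            failed = True
    if failed and not partial:
        row = {k: None for k in row}  # 2.x: partial results are discarded
    return row

bad = '{"a":"a2","int":"","other":2.2}'  # hypothetical record; "int" won't cast
parse(bad, partial=True)   # 1.6-style: {'a': 'a2', 'int': None, 'other': 2.2}
parse(bad, partial=False)  # 2.x-style: {'a': None, 'int': None, 'other': None}
```

With `partial=True` the good fields `a` and `other` survive; with `partial=False` the whole row comes back null, which matches the data-loss concern raised above.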