Maxim, thanks for your reply.
I've left a comment in the following JIRA issue:
https://issues.apache.org/jira/browse/SPARK-23194?focusedCommentId=16582025=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-16582025
Hello community,
I cannot manage to run the from_json method with the "columnNameOfCorruptRecord" option.
```
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import spark.implicits._

val data = Seq(
  "{'number': 1}",
  "{'number': }"
).toDF("value")

// The corrupt-record column must also appear in the schema as a string field.
val schema = new StructType()
  .add($"number".int)
  .add($"_corrupt_record".string)

data.select(from_json($"value", schema,
  Map("columnNameOfCorruptRecord" -> "_corrupt_record"))).show(false)
```
Hello,
We have the same issue.
We use the latest release, 2.0.2; a setup with 1.6.1 works fine.
Could somebody suggest a workaround?
Kind regards,
Denis
Hello community,
We have a challenge and no idea how to solve it.
The problem: say we have the following environment:
1. `cluster A`: this cluster does not use Kerberos, and we use it as a source
of data. Importantly, we do not manage this cluster.
2. `cluster B`, small cluster where our