I am writing ORC binaries in Java, and they deserialize perfectly with the
Apache ORC jar (from the docs) that I've used to validate the data. The
schemas look good, etc.

When reading this data via Spark, however, we are encountering failures, in
particular:

mismatched input '<' expecting '>'(line 1, pos 6569)
taxPercent:uniontype<int,float>,

Does Spark support uniontypes like this? Just curious what some
plausible workarounds could be.
Thanks.
