Hi,
I'm having a problem converting decimal data from Spark SQL to Avro.
From a Spark application, I'm trying to write an Avro file using the
following schema for the decimal field:

{ "name":"num", "type": ["null", {"type": "bytes", "logicalType":
"decimal", "precision": 3, "scale": 1}], "doc":"durata" }

Then I convert the number from a string to a BigDecimal, set the scale,
convert it to a byte array, and write the Avro file, roughly as sketched
below.
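
For reference, here is a minimal sketch of the conversion I'm doing,
using the plain Avro Java API from Scala (the record wrapper, file name,
and sample value are illustrative, not my exact code):

import java.io.File
import java.math.{BigDecimal => JBigDecimal, RoundingMode}
import java.nio.ByteBuffer

import org.apache.avro.Schema
import org.apache.avro.file.DataFileWriter
import org.apache.avro.generic.{GenericData, GenericDatumWriter, GenericRecord}

object DecimalAvroSketch {
  // Record schema embedding the decimal field shown above.
  val schemaJson =
    """{
      |  "type": "record", "name": "Row",
      |  "fields": [
      |    { "name": "num",
      |      "type": ["null", {"type": "bytes", "logicalType": "decimal",
      |                        "precision": 3, "scale": 1}],
      |      "doc": "durata" }
      |  ]
      |}""".stripMargin

  def main(args: Array[String]): Unit = {
    val schema = new Schema.Parser().parse(schemaJson)

    // String -> BigDecimal at the schema's scale, then to the
    // two's-complement unscaled bytes the decimal logical type expects.
    val dec   = new JBigDecimal("12.3").setScale(1, RoundingMode.HALF_UP)
    val bytes = ByteBuffer.wrap(dec.unscaledValue().toByteArray)

    val record = new GenericData.Record(schema)
    record.put("num", bytes)

    val writer =
      new DataFileWriter[GenericRecord](new GenericDatumWriter[GenericRecord](schema))
    writer.create(schema, new File("decimal.avro"))
    writer.append(record)
    writer.close()
  }
}

The bytes are the big-endian two's-complement representation of the
unscaled value, which is what the Avro spec defines for the decimal
logical type.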
But when I try to query the inserted data (with both Hive and Spark SQL),
I get this error:


Caused by: org.apache.hadoop.hive.serde2.avro.AvroSerdeException: Failed to
obtain scale value from file schema: "bytes"
       ...
Caused by: java.lang.NullPointerException

Is the error caused by incorrect serialization of the Avro file, or by
something else? Has anyone encountered this issue? Any suggestions? Thanks,

Ernesto
