[ https://issues.apache.org/jira/browse/SPARK-31074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17054859#comment-17054859 ]

Kyrill Alyoshin edited comment on SPARK-31074 at 3/9/20, 11:57 AM:
-------------------------------------------------------------------

The first issue was about controlling _nullability_ in the Spark schema generated 
through the bean encoder. This issue is about allowing nullable Spark schema 
fields to be written to an Avro schema where they are declared as _non-null_. 
(Of course, we assume that Spark's values will never actually be _null_.)

The first issue is rather narrow and applies to the Java bean encoder only. This 
issue applies to all nullable columns in a Spark schema: a column can be 
_nullable_ simply because the datasource returned it as such (without any 
encoders involved). A minimal sketch of both cases follows below.

There is a subtle difference here, but the issues are related.
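
To make the distinction concrete, here is a minimal sketch (the _Person_ bean is hypothetical):

{code:scala}
import org.apache.spark.sql.Encoders

// Hypothetical Java-style bean with a single non-primitive field.
class Person extends Serializable {
  @scala.beans.BeanProperty var name: String = _
}

// Encoders.bean() marks every non-primitive field as nullable.
println(Encoders.bean(classOf[Person]).schema.treeString)
// root
//  |-- name: string (nullable = true)

// The same nullability can come straight from a datasource, with no
// encoder involved: e.g., spark.read.parquet(...) typically reports
// its columns as nullable regardless of the actual data.
{code}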


was (Author: kyrill007):
The first issue was about controlling _nullability_ in the Spark schema. This issue 
is about allowing nullable Spark schema fields to be written to an Avro schema 
where they are declared as _non-null_. Of course, we assume that Spark's values 
will never actually be _null_. There is a subtle difference here, but 
the issues are related.

> Avro serializer should not fail when a nullable Spark field is written to a 
> non-null Avro column
> ------------------------------------------------------------------------------------------------
>
>                 Key: SPARK-31074
>                 URL: https://issues.apache.org/jira/browse/SPARK-31074
>             Project: Spark
>          Issue Type: Improvement
>          Components: Spark Core
>    Affects Versions: 2.4.4
>            Reporter: Kyrill Alyoshin
>            Priority: Major
>
> Spark StructType schemas are strongly biased towards having _nullable_ fields. 
> In fact, this is what _Encoders.bean()_ does: any non-primitive field is 
> automatically _nullable_. When we attempt to serialize dataframes using 
> *user-supplied* Avro schemas in which the corresponding fields are marked as 
> _non-null_ (i.e., they are not of _union_ type), serialization fails with 
> the following exception:
>  
> {code:java}
> Caused by: org.apache.avro.AvroRuntimeException: Not a union: "string"
>       at org.apache.avro.Schema.getTypes(Schema.java:299)
>       at 
> org.apache.spark.sql.avro.AvroSerializer.org$apache$spark$sql$avro$AvroSerializer$$resolveNullableType(AvroSerializer.scala:229)
>       at 
> org.apache.spark.sql.avro.AvroSerializer$$anonfun$3.apply(AvroSerializer.scala:209)
>  {code}
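> For context, a minimal reproduction sketch (assuming Spark 2.4 with the 
> spark-avro module on the classpath; all names and paths are illustrative):
> {code:scala}
> import org.apache.spark.sql.SparkSession
>
> val spark = SparkSession.builder().master("local[*]").getOrCreate()
> import spark.implicits._
>
> // "name" is inferred as nullable even though no value is null.
> val df = Seq("a", "b").toDF("name")
>
> // A user-supplied Avro schema declaring "name" as non-null (not a union).
> val avroSchema =
>   """{"type": "record", "name": "Rec",
>     |  "fields": [{"name": "name", "type": "string"}]}""".stripMargin
>
> // Fails with AvroRuntimeException: Not a union: "string"
> df.write.format("avro").option("avroSchema", avroSchema).save("/tmp/out")
> {code}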
> This seems rather draconian. We should certainly be able to write a field of 
> the same name and type into a non-nullable Avro column, as long as its value 
> is never actually null. In fact, the problem is so *severe* that it is not 
> clear what can be done at all when the Avro schema is given to you as part 
> of an API communication contract (i.e., it cannot be changed); a possible 
> workaround is sketched below.
> This is an important issue.
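> As a stopgap, one workaround sketch (the _asNonNullable_ helper is 
> hypothetical and handles top-level fields only; the caller must guarantee 
> that no value is actually null):
> {code:scala}
> import org.apache.spark.sql.DataFrame
> import org.apache.spark.sql.types.StructType
>
> // Rebuild the DataFrame with every top-level field marked non-nullable
> // so the Avro serializer accepts a non-union schema.
> def asNonNullable(df: DataFrame): DataFrame = {
>   val schema = StructType(df.schema.fields.map(_.copy(nullable = false)))
>   df.sparkSession.createDataFrame(df.rdd, schema)
> }
> {code}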
>  
>  


