ksbeyer commented on code in PR #51466:
URL: https://github.com/apache/spark/pull/51466#discussion_r2237931947


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/TableOutputResolver.scala:
##########
@@ -132,7 +132,11 @@ object TableOutputResolver extends SQLConfHelper with Logging {
       case (valueType, colType) if DataType.equalsIgnoreCompatibleNullability(valueType, colType) =>
         val canWriteExpr = canWrite(
           tableName, valueType, colType, byName = true, conf, addError, colPath)
-        if (canWriteExpr) checkNullability(value, col, conf, colPath) else value
+        if (canWriteExpr) {
+          applyColumnMetadata(checkNullability(value, col, conf, colPath), col)

Review Comment:
   Correct.  
   
   I'd prefer to remove the table metadata from the query attributes instead, but there was concern about the risk to all the connectors out there that might be using the metadata.
   
   So this change makes the table metadata _always_ propagate to the query attributes, and _only_ the table metadata (removing the source metadata that sometimes leaked through).
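
   To illustrate the intended behavior, here is a minimal, self-contained sketch. It does **not** use Spark's real `AttributeReference`/`Metadata` types; `Attribute` and `applyColumnMetadata` below are simplified stand-ins showing how the output attribute's metadata is taken solely from the table column, so source metadata can no longer leak through:

```scala
// Simplified stand-in for an attribute carrying per-column metadata.
// NOT Spark's real API; just a sketch of the propagation rule described above.
case class Attribute(name: String, metadata: Map[String, String])

object MetadataPropagation {
  // Hypothetical analogue of applyColumnMetadata: keep the query value's
  // identity, but replace its metadata entirely with the table column's
  // metadata, discarding whatever the source attribute carried.
  def applyColumnMetadata(queryAttr: Attribute, tableCol: Attribute): Attribute =
    queryAttr.copy(metadata = tableCol.metadata)

  def main(args: Array[String]): Unit = {
    // Source metadata that previously could leak into the output attributes.
    val source = Attribute("c", Map("source.origin" -> "parquet"))
    // The target table column's metadata, which should always win.
    val table  = Attribute("c", Map("comment" -> "customer id"))

    val out = applyColumnMetadata(source, table)
    assert(out.metadata == Map("comment" -> "customer id"))
    println(out.metadata)
  }
}
```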



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
