amogh-jahagirdar commented on code in PR #9556:
URL: https://github.com/apache/iceberg/pull/9556#discussion_r1469971672
##########
spark/v3.4/spark-extensions/src/main/scala/org/apache/spark/sql/catalyst/analysis/RewriteMergeIntoTable.scala:
##########
@@ -214,6 +214,8 @@ object RewriteMergeIntoTable extends RewriteRowLevelIcebergCommand with Predicat
val rowFromSourceAttr = resolveAttrRef(ROW_FROM_SOURCE_REF, joinPlan)
val rowFromTargetAttr = resolveAttrRef(ROW_FROM_TARGET_REF, joinPlan)
+ // The output expression should retain read attributes for correctly determining nullability
+ val matchedOutputsWithAttrs = matchedActions.map(matchedActionOutput(_, metadataAttrs) :+ readAttrs)
Review Comment:
Thanks @rdblue, I appreciate the detailed thread with the reasoning; I followed along and it makes sense. I agree that adding the readAttrs to the notMatchedOutput produces a `MergeRows` logical plan that is logically sound, compared to the previous approach, which forced the nullability check to pass simply by appending the attributes to the matched output list. I've updated the PR. @aokolnychyi, let us know what you think.
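To illustrate the nullability point being discussed, here is a minimal, self-contained sketch (not Iceberg's actual API; `deriveNullability` and the branch data are hypothetical) of why the combined output attributes of a `MergeRows`-style node must be nullable whenever any branch (matched or not-matched, including the read attributes) can produce a null in that position:

```scala
// Hedged sketch: deriving per-column nullability across output branches.
// Each branch is one row-producing case (e.g. a matched action's output,
// a not-matched action's output, or the pass-through read attributes).
object NullabilitySketch {
  // A column of the combined output is nullable if ANY branch may emit
  // null in that position.
  def deriveNullability(branches: Seq[Seq[Boolean]]): Seq[Boolean] =
    branches.transpose.map(_.exists(identity))

  def main(args: Array[String]): Unit = {
    val matchedOutput    = Seq(false, true)  // col0 non-null, col1 nullable
    val notMatchedOutput = Seq(true, false)  // col0 nullable, col1 non-null
    val combined = deriveNullability(Seq(matchedOutput, notMatchedOutput))
    println(combined.mkString(","))  // true,true
  }
}
```

The sketch captures why attaching the read attributes to the appropriate branch changes the derived plan: the union of branches, not any single output list, determines whether each output attribute is nullable.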
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]