clownxc commented on code in PR #40707:
URL: https://github.com/apache/spark/pull/40707#discussion_r1161368201


##########
sql/catalyst/src/main/scala/org/apache/spark/sql/errors/QueryExecutionErrors.scala:
##########
@@ -2795,7 +2795,9 @@ private[sql] object QueryExecutionErrors extends QueryErrorsBase {
   def nullPointException(errMsg: String): SparkUserException = {
     new SparkUserException(
       errorClass = "_LEGACY_ERROR_TEMP_3044",
-      messageParameters = Map("field" -> errMsg)
-    )
+      messageParameters = Map(
+        "field" -> errMsg
+      ),
+      cause = new NullPointerException)

Review Comment:
   > I have not followed the error classes changes much - but this is counter intuitive - why are we not passing the actual exception here? Instead of creating a dummy exception?
   
   
   Thank you very much for your review. I have updated the code; could you re-review it when you have a moment and leave any further comments?
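   
   For context, here is a minimal sketch of the direction suggested above, i.e. threading the exception caught at the call site through instead of constructing a fresh one. The extra `cause` parameter is an illustration only, not the final PR code; `SparkUserException` and the error class name are taken from the quoted diff.
   
     // Sketch only: accept the NullPointerException caught at the call site
     // and pass it through as the cause, rather than creating a dummy one here.
     def nullPointException(errMsg: String, cause: NullPointerException): SparkUserException = {
       new SparkUserException(
         errorClass = "_LEGACY_ERROR_TEMP_3044",
         messageParameters = Map("field" -> errMsg),
         cause = cause)
     }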



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: reviews-unsubscr...@spark.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

