himadripal commented on code in PR #1385:
URL: https://github.com/apache/datafusion-comet/pull/1385#discussion_r1987983268
##########
spark/src/test/scala/org/apache/comet/CometCastSuite.scala:
##########
@@ -1210,27 +1213,36 @@ class CometCastSuite extends CometTestBase with
AdaptiveSparkPlanHelper {
val cometMessage =
if (cometException.getCause != null)
cometException.getCause.getMessage
else cometException.getMessage
- if (CometSparkSessionExtensions.isSpark40Plus) {
- // for Spark 4 we expect to sparkException carries the message
+ // for comet decimal conversion throws ArrowError(string) from arrow - across spark versions the message dont match.
+ if (df.schema("a").dataType.typeName.contains("decimal") && toType.typeName
+     .contains("decimal")) {
assert(
- sparkException.getMessage
- .replace(".WITH_SUGGESTION] ", "]")
- .startsWith(cometMessage))
- } else if (CometSparkSessionExtensions.isSpark34Plus) {
- // for Spark 3.4 we expect to reproduce the error message exactly
- assert(cometMessage == sparkMessage)
+ cometMessage.contains("too large to store"),
+ sparkMessage.contains("cannot be represented as"))
Review Comment:
My thought was that `false == false` is a valid case: if Spark throws some exception other than "cannot be represented", then Comet should also throw some other exception, not "too large to store". I can make this explicit by putting the assert inside an `if`:
```scala
if (sparkMessage.contains("cannot be represented as")) {
  assert(cometMessage.contains("too large to store"))
}
```
WDYT?
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]