sandeep-katta commented on pull request #28600: URL: https://github.com/apache/spark/pull/28600#issuecomment-632692345
> ```
> org.scalatest.exceptions.TestFailedException: "struct<([3 div 2]):bigint>" did not equal "struct<([CAST(3 AS BIGINT) div CAST(2 AS BIGINT)]):bigint>" Schema did not match for query SELECT 3 div 2
> ```
>
> OK, so the problem here is that we change the auto-generated column name of `3 div 2`, because we add a cast.
>
> It's not a big deal; we can just update all the tests. But I'm wondering if it's better to embed the cast in `IntegralDivide` itself, e.g.
>
> ```scala
> case class IntegralDivide(...) {
>   private lazy val div: (Any, Any) => Any = {
>     val integral = left.dataType match {
>       case _: IntegralType =>
>         LongType.integral.asInstanceOf[Integral[Any]] // see the change in this line
>       case d: DecimalType =>
>         d.asIntegral.asInstanceOf[Integral[Any]]
>     }
>     (x, y) => {
>       val res = integral.quot(x, y)
>       if (res == null) {
>         null
>       } else {
>         integral.toLong(res)
>       }
>     }
>   }
>
>   override def operationCode(v1: String, v2: String): String = {
>     if (decimalType) ... else {
>       s"((long) $v1) / $v2"
>     }
>   }
> }
> ```
>
> This is probably more efficient, as there are fewer expressions.

This was my first approach as well; right now I am updating the golden files. Let me know which way we should go, and I will update the code accordingly.
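The trade-off discussed above can be sketched in plain Scala, outside Spark. This is a minimal, hypothetical illustration (`DivSketch` and its methods are not Spark's API): option 1 casts both operands to `Long` up front, which adds `Cast` expressions to the plan and so changes the auto-generated column name; option 2 divides in the operands' own type using `scala.math.Integral` and widens only the result, mirroring the quoted `IntegralDivide` sketch.

```scala
// Hypothetical sketch, not Spark code: contrasting operand-level casts
// with result-level widening for integral division.
object DivSketch {
  // Option 1: explicit casts around the operands
  // (in Spark this corresponds to extra Cast(_, BigIntType) expressions).
  def divWithCasts(x: Int, y: Int): Long = {
    val xl: Long = x.toLong // CAST(x AS BIGINT)
    val yl: Long = y.toLong // CAST(y AS BIGINT)
    xl / yl                 // truncating integral division on Long
  }

  // Option 2: divide in the operands' own type, widen only the result,
  // as in the quoted sketch's integral.toLong(integral.quot(x, y)).
  def divEmbedded(x: Int, y: Int): Long = {
    val integral = implicitly[Integral[Int]]
    integral.toLong(integral.quot(x, y))
  }
}
```

Both variants return the same value (`divWithCasts(3, 2)` and `divEmbedded(3, 2)` are both `1L`), so only the shape of the expression tree, and hence the generated column name, differs between the two approaches.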