MaxGekk commented on code in PR #51383:
URL: https://github.com/apache/spark/pull/51383#discussion_r2190347357
##########
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/util/DateTimeUtils.scala:
##########
@@ -834,4 +834,31 @@ object DateTimeUtils extends SparkDateTimeUtils {
def makeTimestampNTZ(days: Int, nanos: Long): Long = {
localDateTimeToMicros(LocalDateTime.of(daysToLocalDate(days),
nanosToLocalTime(nanos)))
}
+
+  /**
+   * Adds a day-time interval to a time.
+   *
+   * @param time A time in nanoseconds.
+   * @param timePrecision The number of digits of the fraction part of time.
+   * @param interval A day-time interval in microseconds.
+   * @param intervalEndField The rightmost field which the interval comprises.
+   *                         Valid values: 0 (DAY), 1 (HOUR), 2 (MINUTE), 3 (SECOND).
+   * @param targetPrecision The number of digits of the fraction part of the resulting time.
+   * @return A time value in nanoseconds, or throws an arithmetic overflow error
+   *         if the result is out of the valid time range [00:00, 24:00).
Review Comment:
> so if the day-time interval has the day field, it will always overflow?

Not always. The value in the day field can be 0, and in that case it cannot overflow.

> Shall we check the start field of day-time interval at the analysis time to make sure it's not DAY?

Maybe, do you mean the end field? And prohibit `DayTimeIntervalType(DAY, DAY)`? Even with that type, users might still construct valid expressions like `TIME'12:30' + INTERVAL '0' DAY`. The SQL standard says nothing about that case (@srielau Am I right?). @dongjoon-hyun @yaooqinn @cloud-fan WDYT, should we disallow such day intervals?
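To illustrate the semantics under discussion, here is a minimal, hypothetical sketch (not the actual patch; the method and constant names are assumptions) of adding a day-time interval in microseconds to a time in nanoseconds, throwing on results outside the valid range [00:00, 24:00) as the Javadoc describes:

```scala
object TimeAddIntervalSketch {
  // Units per the Javadoc: the time is in nanoseconds, the interval in microseconds.
  private val NANOS_PER_MICRO = 1000L
  private val NANOS_PER_DAY = 24L * 60 * 60 * 1000 * 1000 * 1000

  /** Adds `intervalMicros` to `timeNanos`, throwing if the result leaves [00:00, 24:00). */
  def timeAddInterval(timeNanos: Long, intervalMicros: Long): Long = {
    // Math.addExact/multiplyExact surface Long overflow as ArithmeticException.
    val result = Math.addExact(timeNanos, Math.multiplyExact(intervalMicros, NANOS_PER_MICRO))
    if (result < 0 || result >= NANOS_PER_DAY) {
      throw new ArithmeticException(s"Time result is out of the range [00:00, 24:00): $result ns")
    }
    result
  }

  def main(args: Array[String]): Unit = {
    // TIME'12:30' + INTERVAL '0' DAY stays valid, as noted in the reply above.
    val noon1230 = (12L * 3600 + 30 * 60) * 1000 * 1000 * 1000
    assert(timeAddInterval(noon1230, 0L) == noon1230)
    // A one-day interval pushes any time value past 24:00 and must overflow.
    val oneDayMicros = 24L * 60 * 60 * 1000L * 1000
    val overflowed =
      try { timeAddInterval(noon1230, oneDayMicros); false }
      catch { case _: ArithmeticException => true }
    assert(overflowed)
  }
}
```

This is why a nonzero DAY field always overflows: any whole-day shift moves the result outside [00:00, 24:00), while a zero-valued day interval is a no-op.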
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]