Ryan, thanks a lot for the context.

Many of our use cases generate the epoch timestamp with Java's
"System.currentTimeMillis()". I'm just trying to see if we can avoid the
conversion from milliseconds to microseconds; that is the only reason I am
asking.
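For reference, the conversion itself is just a multiplication by 1,000. A
minimal Java sketch of the extra step being discussed (the class and
variable names are only illustrative, not from any Iceberg API):

    import java.util.concurrent.TimeUnit;

    public class EpochMicros {
        public static void main(String[] args) {
            // Epoch timestamp in milliseconds, as produced by System.currentTimeMillis()
            long epochMillis = System.currentTimeMillis();

            // Convert to microseconds before handing the value to a
            // microsecond-precision timestamp column.
            long epochMicros = TimeUnit.MILLISECONDS.toMicros(epochMillis);
            // Equivalent: Math.multiplyExact(epochMillis, 1_000L)

            System.out.printf("millis=%d micros=%d%n", epochMillis, epochMicros);
        }
    }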

On Tue, Mar 2, 2021 at 5:42 PM Ryan Blue <rb...@netflix.com.invalid> wrote:

> The reason why we only support microseconds is that it is the precision
> required by the SQL spec. It's also the one that works for most use cases.
> Some people need nanos, but micros works the majority of the time. I don't
> think that it is a good idea to support millis because that can be
> represented in micros and it's easier to implement the spec if we don't
> allow customization on too many dimensions.
>
> Is there a specific reason to support millis instead of micros? I doubt
> it's because the microsecond range of values isn't large enough.
>
> On Mon, Mar 1, 2021 at 10:34 PM Steven Wu <stevenz...@gmail.com> wrote:
>
>>
>> Right now, the Iceberg timestamp type only supports microsecond precision.
>> Support for other precisions like milliseconds could be useful, as it is
>> pretty commonly used. If we want to use hidden partitioning (date or
>> hour) on a timestamp field with millisecond precision, we currently have to
>> convert the value to microseconds first.
>>
>> Any reason why microseconds only? Have we considered supporting
>> millisecond precision?
>>
>> Thanks,
>> Steven
>>
>
>
> --
> Ryan Blue
> Software Engineer
> Netflix
>
