IMHO that’s not a good comparison. By that logic we shouldn’t have double 
because it’s slower than int.
We should compare against the competition first.

Maybe as part of this effort we’ll need to prototype two competing solutions.

The vast majority of the difference should come from storage cost; few 
arithmetic operations will feel it.
After all, there are not many arithmetic operations defined on timestamps to 
begin with.


On Mar 17, 2025 at 3:03 PM -0700, Reynold Xin <[email protected]> wrote:
Pretty much anything (say vs current timestamp operations in Spark).

On Mon, Mar 17, 2025 at 2:51 PM serge rielau.com <[email protected]> wrote:
What are you comparing performance against?
On Mar 17, 2025 at 11:54 AM -0700, Reynold Xin <[email protected]> wrote:
Any thoughts on how to deal with performance here? Initially we didn't do 
nanosecond-level precision because of performance (we would not be able to fit 
everything into a 64-bit int).

On Mon, Mar 17, 2025 at 11:34 AM Sakthi <[email protected]> wrote:
+1 (non-binding)

On Mon, Mar 17, 2025 at 11:32 AM Zhou Jiang <[email protected]> wrote:
+1 for the nanosecond support


> On Mar 16, 2025, at 16:03, Dongjoon Hyun <[email protected]> wrote:
>
> +1 for supporting NanoSecond Timestamps.
>
> Thank you, Qi.
>
> Dongjoon.
>
> ---------------------------------------------------------------------
> To unsubscribe e-mail: [email protected]
>
