> double (or even better long double) would be a better underlying
> type for time_t than long long.

If you believe strongly in this idea, you should take an entire
operating system base and prove the case.  By converting the entire
base.  By showing that it will work.  By getting X and firefox running
on it.  By fixing NTP, DNS, and everything else.

> Programs that are using time_t properly would not notice the
> difference. Programs that are very incorrect would get complete garbage
> for a result, and thus be easier to notice and correct.

And through this process we will reach time_t utopia?


> Using double for time_t would allow a time_t value to be used as a
> time stamp for events separated by milliseconds. Using long double
> for time_t would allow time_t values to be used as time stamps for
> the start and finish of crossing an atom. I am sure CERN would
> like it.
>
> If time_t is a double, it also makes sense for clock_t to be a
> double in the same units.

So in that case, you should start by converting an entire OS source
tree.

Without that step, your assertion carries little weight, because the
benefits may well turn out to be minor compared to the difficulties
along the way.
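
(For what it is worth, the precision claim above is easy to quantify.
Here is a rough sketch; the epoch constant is just an illustrative
value, not anything from your proposal:

/*
 * Sketch: spacing of representable double values near the current
 * Unix epoch, i.e. the resolution a double-typed time_t would have.
 * Build with: cc -o spacing spacing.c -lm
 */
#include <math.h>
#include <stdio.h>

int
main(void)
{
	double now = 1700000000.0;	/* roughly "now", seconds since the epoch */
	double step = nextafter(now, INFINITY) - now;

	/*
	 * With a 53-bit mantissa, the spacing at this magnitude is about
	 * 2^-22 s (~0.24 microseconds): fine for millisecond events,
	 * already too coarse for nanosecond timestamps.
	 */
	printf("spacing near %.0f s: %g s (%g ns)\n", now, step, step * 1e9);
	return 0;
}

So millisecond events fit; nanosecond resolution already does not.)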

I wish you a lot of luck inside the kernel, because it cannot do
floating point.  And time_t math is done inside the kernel (what are
the odds...)  See, kernels avoid doing floating point because the
floating point registers hold values belonging to the userland
contexts.  You could undo this "feature", and then argue that the
performance losses are irrelevant.
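
To make that concrete, here is roughly the kind of integer-only
arithmetic kernels do on timestamps today (the helper name and the
values are made up for illustration); each such site would have to
become a floating point operation under your proposal:

/*
 * Sketch: timespecadd-style integer time arithmetic, the sort of
 * thing kernels do without ever touching the FPU.  Illustrative
 * userland code, not actual kernel source.
 */
#include <stdio.h>
#include <time.h>

static void
ts_add(const struct timespec *a, const struct timespec *b,
    struct timespec *res)
{
	res->tv_sec = a->tv_sec + b->tv_sec;
	res->tv_nsec = a->tv_nsec + b->tv_nsec;
	if (res->tv_nsec >= 1000000000L) {
		res->tv_sec++;
		res->tv_nsec -= 1000000000L;
	}
}

int
main(void)
{
	struct timespec a = { 1700000000, 999999999 };
	struct timespec b = { 0, 2 };
	struct timespec r;

	ts_add(&a, &b, &r);
	/* prints 1700000001.000000001 */
	printf("%lld.%09ld\n", (long long)r.tv_sec, r.tv_nsec);
	return 0;
}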

You will also have great fun in the various DNS-related codebases.

Good luck!
