I will say the very same thing Ryan did several weeks [months?] ago.
Where were you for the last two years?

Complaining about how fucked up the design decisions were for apr_time_t. It's in the archives. People didn't want to deal with it before due to more pressing concerns. 2.0 is now out, so there are no more excuses.

  The questions on the table in APR are:

1. Are we the special interest of httpd alone? Are we their sub-project?

Irrelevant. If you want httpd to use APR, then it had better not make httpd
worse for no good reason. If there is a reason, then I want it documented
in the code. If not, if it is just the whim of some folks using APR, then
I will fork the httpd project away from APR. Nobody else has to follow the
redesign, but then at least I'll be satisfied in trying. With the
binary microseconds change we have finally tackled the performance
concern to at least a level of reasonable trade-off, albeit undocumented.
Now we just need to solve the problem of all of the inconsistent code
due to people assuming (or never fixing) apr_time_t == time_t.
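
To make that bug class concrete, a minimal sketch using the stock APR time
macros (apr_time_now(), apr_time_from_sec(), APR_USEC_PER_SEC); the function
names are made up for illustration:

    #include <apr_time.h>

    /* BROKEN: assumes apr_time_t == time_t, i.e. seconds.  Since
     * apr_time_t is in microseconds, this "30 second" timeout is
     * actually 30 microseconds. */
    apr_time_t broken_expiry(apr_time_t start)
    {
        return start + 30;
    }

    /* CORRECT: convert explicitly at the seconds boundary. */
    apr_time_t fixed_expiry(apr_time_t start)
    {
        return start + apr_time_from_sec(30);  /* 30 * APR_USEC_PER_SEC */
    }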


2. Was this already thought out and debated, and this design won?

No. It was put off until we were willing to tackle performance issues. That time is now.

3. Is a veto of long-existing code within APR allowed by protocol?

If there is a good reason, yes.

    4. What is an appropriate name?  This depends on:

       a. Should the usec or busec be a strong contract with the programmer?

       b. Is it the only time representation internal across APR?

  My answers?

1. No.
2. Yes.
3. No.
4. busec or butime, with a strong contract, and the only representation.
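
For anyone who hasn't followed the busec proposal: "binary microseconds", as
I understand it, means a 64-bit scalar with 2^20 ticks per second, so the
seconds boundary is a shift rather than a 64-bit divide. A rough sketch; the
type and helper names below are hypothetical, not APR API:

    #include <stdint.h>

    typedef int64_t busec_t;            /* hypothetical busec scalar */
    #define BUSEC_PER_SEC (1 << 20)     /* 2^20 ticks per second     */

    /* Whole seconds fall out with a shift, not a 64-bit division. */
    static int64_t busec_sec(busec_t t)      { return t >> 20; }
    static busec_t busec_from_sec(int64_t s) { return s << 20; }

    /* From a POSIX sec/usec pair (non-negative times only): scale
     * 0..999999 usec into 2^20 binary fractions of a second. */
    static busec_t busec_from_sec_usec(int64_t sec, int64_t usec)
    {
        return (sec << 20) | ((usec << 20) / 1000000);
    }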

Were there bugs moving to APR? Yes. Was sec vs. finer granularity
a significant debate? Yes. Debated over and over. Is apr_time_t therefore
a misleading identifier? Probably; I concede that point to you and aaron.

It isn't a debate. It is a point of fact that it causes bugs to be introduced and reintroduced, over and over again.

We are not debating if we will revert to the old time_t values. That was said
and done way too long ago, and if you chose not to be a party to that whole
bit, or register your vetoes then, it is WAY too late. The code exists; now you
can put a whole other vote to the list. Fine. Don't commingle your temper
tantrum with a worthwhile vote on a significant issue. Keep to the question.

The way you phrased the question was so hopelessly biased that I felt it necessary to respond with content. I got tired of the changing question and put the rationale instead. If you don't agree with my rationale, then please explain why applications that only use second resolutions should be required to operate on microseconds. Furthermore, given that we can't provide actual microsecond resolution in a cross-platform way, isn't it doing harm to suggest to the application that we can?

I'm not about to rip out all your off-topic comments, although I'm sure others
are more than happy to. Can we trim it down to the question and please
leave the tm structs out of this specific vote? I'll address those questions
here, not in STATUS; feel free to add another [separate] vote:

The tm struct is the way all of the operating systems have decided this
question in the past. That balances the needs of second-based time calls with
microsecond-based time calls; that's why the OS people define their own
structures that way. busecs are another solution, one that will be better for
64-bit machines but worse for 32-bit *shrug*, but we still don't provide all
of the routines needed by clients to manipulate these things, and we still
have no answer for apr_interval_time.
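
To see that trade-off in code: the split-integer style the OSes settled on
(e.g. POSIX struct timeval) against a single 64-bit scalar. A minimal sketch
in plain C; nothing below is APR API:

    #include <stdint.h>
    #include <sys/time.h>   /* POSIX struct timeval: tv_sec + tv_usec */

    /* Split representation: even a delta needs carry/borrow logic. */
    static struct timeval tv_delta(struct timeval end, struct timeval start)
    {
        struct timeval d;
        d.tv_sec  = end.tv_sec  - start.tv_sec;
        d.tv_usec = end.tv_usec - start.tv_usec;
        if (d.tv_usec < 0) {            /* borrow from the seconds */
            d.tv_sec  -= 1;
            d.tv_usec += 1000000;
        }
        return d;
    }

    /* Scalar representation: a delta is one subtraction, cheap on a
     * 64-bit machine, multi-instruction arithmetic on a 32-bit one. */
    static int64_t scalar_delta(int64_t end, int64_t start)
    {
        return end - start;
    }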

fielding    2002/07/11 17:30:59

         [wrowe: deltas require NO definition of the scale.]

+ [fielding: That's nonsense. What does overflow mean? What are you
+ going to do when you print? How do you interface with other library
+ routines? Scale always matters for scalars.]

Not if they are assured to contain enough precision, which apr_time_t does.
Not for simple deltas of start and end times, which is what matters most.
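
The point in code: with a 64-bit scalar, ordering and deltas need no scale at
all; only conversion and printing do. A sketch in plain C with hypothetical
names:

    #include <stdint.h>
    #include <stdio.h>

    typedef int64_t opaque_time_t;  /* hypothetical scalar, scale unknown */

    /* Scale-free, so long as both operands share the same unit and the
     * type has enough precision: */
    static opaque_time_t delta(opaque_time_t end, opaque_time_t start)
    {
        return end - start;
    }

    static int before(opaque_time_t a, opaque_time_t b)
    {
        return a < b;
    }

    /* NOT scale-free: printing (or any conversion) is where the scale
     * has to enter the contract. */
    static void print_secs(opaque_time_t t, int64_t ticks_per_sec)
    {
        printf("%lld.%06lld\n",
               (long long)(t / ticks_per_sec),
               (long long)(t % ticks_per_sec) * 1000000 / ticks_per_sec);
    }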


But I'm in favor of a strongly defined contract for scale with the user,
although others on the list differ.

[wrowe: ... a dead horse. Compositing and breaking apart for each simple
delta (the most common case) is too costly. Scalars are the only clean
answer - and you do not need to know scale to do addition/subtraction.]
+
+ [fielding: Dean argued that in general. I argue that httpd never
+ does time arithmetic other than in seconds and second-comparisons.
+ Microseconds are therefore harmful to httpd.]

httpd is but one client, and they have participated since day one. There is a
tradeoff, and the httpd-2.0 developers were willing to make that trade. End of
debate, it's been said and done long ago.


And you are choosing to ignore third party modules entirely, I note.

No, it is the third party modules that are still using apr_time_t as seconds.


               in addition to seconds.  Why do you want to throw away the
               microseconds?!!]

  +              [fielding: Sorry, I missed them:
  +                  86 calls to apr_time_now()
  +                  32 calls to time()
  +               +1 to making time consistent.]

++1 and amen to killing time() calls.
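
The replacement is mechanical; a sketch using today's APR macros
(apr_time_now(), apr_time_sec()), with a made-up function for illustration:

    #include <time.h>
    #include <apr_time.h>

    void stamp_example(void)
    {
        time_t     then = time(NULL);      /* inconsistent: libc seconds    */
        apr_time_t now  = apr_time_now();  /* consistent: APR microseconds  */

        /* Where whole seconds really are wanted, convert explicitly: */
        time_t     secs = (time_t)apr_time_sec(now);

        (void)then; (void)secs;
    }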

+ [fielding: 1. POSIX requires it to be long, so largest native int.
+ 2. Microsoft claims otherwise, but it is still vaporware anyway.
+ 3. POSIX always stores them as separate integers.
+ 4. Benchmarks are meaningless unless they average over hundreds
+ of requests, which requires double floats (not time intervals).
+ 5. +1 for using struct tm everywhere.
+ 6. No.]

The long type on Win64 is also 32 bits. Yes, it's bogus, and I'm not even
going to try to defend it. If you don't want to code to it, fine, but APR will
support it.


Benchmarks don't need to perform every calculation as long floats, especially
while collecting data [during timing, prior to processing.] Your argument
doesn't hold, and several folks have already raised other examples besides
benchmark deltas.

No, time on computers doesn't work that way. You can't add a whole bunch
of individual time calls and expect to have a reasonable answer down to
the microseconds -- the OS does not promise to update the microsecond
field every microsecond (it isn't even available on win32). The microsecond
field is worthless outside of the kernel networking routines. flood and
ab need to take start and stop times, subtract to get an interval, and
divide by number of repetitions. Hence, they are not examples.
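
Which, for the record, is a three-line measurement; a sketch with the real
apr_time_now() and APR_USEC_PER_SEC (the request function and the driver
name are placeholders):

    #include <apr_time.h>

    /* Mean seconds per request: one start, one stop, one divide. */
    double mean_secs_per_request(int reps, void (*do_request)(void))
    {
        apr_time_t start = apr_time_now();
        for (int i = 0; i < reps; i++)
            do_request();                      /* placeholder workload */
        apr_time_t elapsed = apr_time_now() - start;

        /* Go to double floats only here, at the averaging step. */
        return ((double)elapsed / APR_USEC_PER_SEC) / reps;
    }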


And finally, veto on struct tm, for all the well-reasoned arguments Dean
made two years ago. But add a new vote if you like, separate from the
naming debate.

They are the same debate until apr_time_t's name is changed. They only become separate issues after that has been done.

....Roy


