Bill Stoddard wrote:
Work up a patch to the apr_time macros and I'll benchmark it. This should be
an easy and non intrusive thing to do.
I tried to create a patch yesterday, but no luck. Given the
wide range of values over which the macro has to work, I couldn't
come up with a macro that was
Subject: approximating division by a million Re: Why not POSIX time_t?
>
>
> Building upon Cliff's formulas, here's another idea
> for doing faster conversions of the current apr_time_t
> format to seconds:
>
> What we want is t/1000000
>
> What we can do easily i
Ben Laurie wrote:
Brian Pane wrote:
Building upon Cliff's formulas, here's another idea
for doing faster conversions of the current apr_time_t
format to seconds:
What we want is t/1000000
What we can do easily is t/1048576
But what can we add to t/1048576 to approximate t/100?
If I solve for 'C
Brian Pane wrote:
Building upon Cliff's formulas, here's another idea
for doing faster conversions of the current apr_time_t
format to seconds:
What we want is t/1000000
What we can do easily is t/1048576
But what can we add to t/1048576 to approximate t/100?
If I solve for 'C' in
t/1000000 =
Cliff Woolley wrote:
On Mon, 15 Jul 2002, Brian Pane wrote:
(seconds << 20) + microseconds
Yeah, but addition and subtraction of the resulting scalars would require
just as many carry/underflow checks as a structure would...
You can rely on normal scalar arithmetic to handle the carry.
T
Cliff Woolley wrote:
On Mon, 15 Jul 2002, Cliff Woolley wrote:
realsecs = 22/21 * (realusecs >> 20) + 22/21;
realsecs = 44/21 * (realusecs >> 20);
I'm obviously on crack. :)
That last line is of course totally wrong and shouldn't have been there.
:)
So what we really end up with is this
Building upon Cliff's formulas, here's another idea
for doing faster conversions of the current apr_time_t
format to seconds:
What we want is t/1000000
What we can do easily is t/1048576
But what can we add to t/1048576 to approximate t/100?
If I solve for 'C' in
t/1000000 = t/1048576 + t/C
I
On Mon, Jul 15, 2002 at 03:48:55PM -0400, Cliff Woolley wrote:
> So it's something close to:
>
> realsecs = ((realusecs >> 20) + (realsecs/22) + 1);
>
> Which simplifies out as:
>
> realsecs = 22/21 * ((realusecs >> 20) + 1);
> realsecs = 22/21 * (realusecs >> 20) + 22/21;
> realsecs = 44/21 * (
At 02:15 PM 7/15/2002, Brian Pane wrote:
Bill Stoddard wrote:
Cliff demonstrates that there is significant loss of accuracy using just a
shift. I believe (faith? :-) that there is a simple solution to this. Don't
know what it is just now though...
I think the simple solution is to pack the data int
Cliff Woolley wrote:
On Mon, 15 Jul 2002, Brian Pane wrote:
seconds = (t >> 20) + (t >> 24)
That probably isn't accurate enough, but you get the basic idea:
sum a couple of t/(2^n) terms to approximate t/100.
What do you think?
Sounds like the right idea. But I'm still not sure this i
On Mon, 15 Jul 2002, Brian Pane wrote:
> seconds = (t >> 20) + (t >> 24)
>
> That probably isn't accurate enough, but you get the basic idea:
> sum a couple of t/(2^n) terms to approximate t/100.
>
> What do you think?
Sounds like the right idea. But I'm still not sure this isn't too
compli
On Mon, 15 Jul 2002, Ryan Bloom wrote:
> Ok, cool. I thought I had forgotten something insanely simple about
> basic math. Phew. :-D
Programming, calculus, theory of computation, that stuff is
easy. Arithmetic and algebra, that's hard. =-]
You know you're in trouble when you can't even ma
> On Mon, 15 Jul 2002, Cliff Woolley wrote:
>
> > realsecs = 22/21 * (realusecs >> 20) + 22/21;
> > realsecs = 44/21 * (realusecs >> 20);
>
> I'm obviously on crack. :)
>
> That last line is of course totally wrong and shouldn't have been
there.
> :)
Ok, cool. I thought I had forgotten someth
On Mon, 15 Jul 2002, Cliff Woolley wrote:
> realsecs = 22/21 * (realusecs >> 20) + 22/21;
> realsecs = 44/21 * (realusecs >> 20);
I'm obviously on crack. :)
That last line is of course totally wrong and shouldn't have been there.
:)
On Mon, 15 Jul 2002, Bill Stoddard wrote:
> Cliff demonstrates that there is significant loss of accuracy using just a
> shift. I believe (faith? :-) that there is a simple solution to this. Don't
> know what it is just now though...
More numerical musings:
Without compensation:
1-21 seconds co
On Mon, 15 Jul 2002, Brian Pane wrote:
> (seconds << 20) + microseconds
Yeah, but addition and subtraction of the resulting scalars would require
just as many carry/underflow checks as a structure would...
I mean, that's all that really is: a bitfield.
--Cliff
Bill Stoddard wrote:
Cliff demonstrates that there is significant loss of accuracy using just a
shift. I believe (faith? :-) that there is a simple solution to this. Don't
know what it is just now though...
I think the simple solution is to pack the data into the apr_time_t
using
(seconds << 20)
On Mon, 15 Jul 2002, Bill Stoddard wrote:
> > >>3. 64-bit shifts to get approximate seconds
> > >> (fast, but loss of accuracy)
> > >>
> > >>
> > >
> > >If you convert from microseconds to integer seconds (which is what httpd
> > >requires), you lose -resolution- no matter how you do it
> >>3. 64-bit shifts to get approximate seconds
> >> (fast, but loss of accuracy)
> >>
> >>
> >
> >If you convert from microseconds to integer seconds (which is what httpd
> >requires), you lose -resolution- no matter how you do it. If
> the accuracy
> >you lose is smaller than the reso
Bill Stoddard wrote:
Under this proposal, the sequence of time operations in an
httpd request would look like:
1. gettimeofday
(fast, no loss of accuracy)
We cannot avoid this, right?
Right (but it's fast enough that we don't need to worry about it).
2. 64-bit multiplication to bui
> On Mon, 15 Jul 2002, Bill Stoddard wrote:
>
> > > 1. gettimeofday
> > >(fast, no loss of accuracy)
> > We cannot avoid this, right?
>
> Right.
>
> >
> > > 2. 64-bit multiplication to build an apr_time_t
> > >(slow (on lots of current platforms), no loss of accuracy)
> >
>
On Mon, 15 Jul 2002, Bill Stoddard wrote:
> > 1. gettimeofday
> >(fast, no loss of accuracy)
> We cannot avoid this, right?
Right.
>
> > 2. 64-bit multiplication to build an apr_time_t
> >(slow (on lots of current platforms), no loss of accuracy)
>
> Do we eliminate this
> Bill Stoddard wrote:
>
> >New proposal... leave apr_time_t exactly as it is. The
> performance problem
> >is with how we are converting an apr_time_t into a value with 1 second
> >resolution (ie, doing 64 bit divisions). I propose we introduce some new
> >macros (or functions) to efficiently re
+1 (on both :) )
David Reid wrote:
>
> +1 from me. Let's finish this nonsense!
>
> david
>
> > New proposal... leave apr_time_t exactly as it is. The performance
> problem
> > is with how we are converting an apr_time_t into a value with 1 second
> > resolution (ie, doing 64 bit divisions). I
Bill Stoddard wrote:
New proposal... leave apr_time_t exactly as it is. The performance problem
is with how we are converting an apr_time_t into a value with 1 second
resolution (ie, doing 64 bit divisions). I propose we introduce some new
macros (or functions) to efficiently remove resolution fr
> To: APR Dev List
> Subject: Re: Why not POSIX time_t?
>
> +1 from me. Let's finish this nonsense!
>
> david
>
> > New proposal... leave apr_time_t exactly as it is. The performance
> problem
> > is with how we are converting an apr_time_t into a va
+1 from me. Let's finish this nonsense!
david
> New proposal... leave apr_time_t exactly as it is. The performance
problem
> is with how we are converting an apr_time_t into a value with 1 second
> resolution (ie, doing 64 bit divisions). I propose we introduce some new
> macros (or functions)
New proposal... leave apr_time_t exactly as it is. The performance problem
is with how we are converting an apr_time_t into a value with 1 second
resolution (ie, doing 64 bit divisions). I propose we introduce some new
macros (or functions) to efficiently remove resolution from apr_time_t and
do
> > >>If you mean using only second-resolution times, that's an option,
> > >>but not for the httpd. One of the big problems with 1.3 was that
> > >>the request time was only stored with 1-second resolution. We used
> > >>to have to add in custom, redundant time lookups to get better
> > >>resol
> >>If you mean using only second-resolution times, that's an option,
> >>but not for the httpd. One of the big problems with 1.3 was that
> >>the request time was only stored with 1-second resolution. We used
> >>to have to add in custom, redundant time lookups to get better
> >>resolution whene
[EMAIL PROTECTED] wrote:
On Sun, 14 Jul 2002, Brian Pane wrote:
Ryan Bloom wrote:
BTW, this whole conversation started because we wanted to speed up
Apache. Has anybody considered taking a completely different tack to
solve this problem?
If you mean using only second-resolution t
Justin Erenkrantz wrote:
On Sun, Jul 14, 2002 at 07:42:09PM -0700, Brian Pane wrote:
It's a little more efficient if you put the result in
a struct rather than a scalar, but you still have to do
the carry from the seconds field to the microseconds field
if time1.tv_usec < time2.tv_usec. Minimal
On Sun, 14 Jul 2002, Brian Pane wrote:
> Ryan Bloom wrote:
>
> >BTW, this whole conversation started because we wanted to speed up
> >Apache. Has anybody considered taking a completely different tack to
> >solve this problem?
> >
>
> If you mean using only second-resolution times, that's an o
Ryan Bloom wrote:
BTW, this whole conversation started because we wanted to speed up
Apache. Has anybody considered taking a completely different tack to
solve this problem?
If you mean using only second-resolution times, that's an option,
but not for the httpd. One of the big problems with 1.
At 10:56 PM 7/14/2002, Justin Erenkrantz wrote:
On Sun, Jul 14, 2002 at 07:42:09PM -0700, Brian Pane wrote:
> It's a little more efficient if you put the result in
> a struct rather than a scalar, but you still have to do
> the carry from the seconds field to the microseconds field
> if time1.tv_us
On Sun, Jul 14, 2002 at 07:42:09PM -0700, Brian Pane wrote:
> It's a little more efficient if you put the result in
> a struct rather than a scalar, but you still have to do
> the carry from the seconds field to the microseconds field
> if time1.tv_usec < time2.tv_usec. Minimally, subtraction
> of
Aaron Bannert wrote:
The reason we don't use a struct (timeval or any variant thereof)
is that doing addition and subtraction on the struct is much slower,
more complicated, and (if people try to do their own math on the
struct directly) more error-prone than doing the same ops on a
scalar.
H
For clarity, there is only one reason that we aren't just using POSIX's
time_t. While Windows has time_t, it doesn't use time_t's internally.
Instead, it uses a completely different epoch with 100 nano-second
resolution.
The only other reason for apr_time_t is to get usec resolution.
> From: Aa
At 09:35 PM 7/14/2002, Aaron Bannert wrote:
On Sun, Jul 14, 2002 at 07:27:04PM -0700, Brian Pane wrote:
How exactly is the subtraction slower? I'm not at all sure what you
mean by people doing math on the struct directly...
For addition...
rtm.sec = tm1.sec + tm2.sec;
rtm.usec = tm1.usec + tm2.usec;
i
On Sun, Jul 14, 2002 at 07:27:04PM -0700, Brian Pane wrote:
> You're thinking of timeval; time_t is just a long int containing
> seconds since the start of the epoch.
Yes, thanks. :)
> The reason we don't use a struct (timeval or any variant thereof)
> is that doing addition and subtraction on th
Aaron Bannert wrote:
Can someone remind me what the reasons are that we don't use a struct
with separate elements for seconds and microseconds, ala time_t?
You're thinking of timeval; time_t is just a long int containing
seconds since the start of the epoch.
The reason we don't use a struct (tim