Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread Bojan Smojver

Quoting Brian Pane <[EMAIL PROTECTED]>:

> However, if you can do the byte counting in a request-level filter
> (right after the HTTP output filter?), that should enable you to
> bypass the problem.

Counting in the request based filter is easy, but there are a few problems:

- request based input filters can't see the headers, only bodies
- request based input and output filters don't take into account connection-level
protocols like SSL

The above then causes all the numbers to be wrong.

Bojan

-
This mail sent through IMP: http://horde.org/imp/



Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread Brian Pane

On Tue, 2002-09-17 at 21:46, Bojan Smojver wrote:
> Quoting [EMAIL PROTECTED]:
> 
> > The problem is that the filters aren't HTTP specific.  If you make this
> > change, then the filters will be written to take advantage of the f->r
> > field in connection level filters.  Since that field doesn't make sense in
> > non-HTTP requests, modifying the filter_rec so that it is there is a
> > mistake.
> > 
> > Think about it this way.  If you add the request_rec to the
> > core_output_filter, and that filter is modified to use it, then all
> > protocol modules must ensure that the request_rec is added to the filter,
> > even if it doesn't make sense.
> > 
> > If you make it optional to put the request_rec in the filter_rec, then you
> > will be making it much harder to write filters.
> 
> OK, I think I understand now what you're trying to say. Things can go wrong if
> important fields can be NULL when you want them to be non-NULL and vice versa. I
> reckon that good documentation on the matter would help developers understand
> that they cannot depend on f->r being set for some protocols, which would
> eliminate confusion. But, this is the first ever filter that I'm writing and
> it's awfully simple, so I could be dead wrong.

The real problem is that "r" is HTTP-specific.  If we were designing
the server from scratch, I suppose we could introduce a protocol
independent "generic request structure" for use in any protocol that
had some sort of discrete request, and make that accessible within
the connection-level filters.  But for historical reasons, the
request_rec is HTTP-specific.

However, if you can do the byte counting in a request-level filter
(right after the HTTP output filter?), that should enable you to
bypass the problem.

Brian






Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread Bojan Smojver

Quoting [EMAIL PROTECTED]:

> The problem is that the filters aren't HTTP specific.  If you make this
> change, then the filters will be written to take advantage of the f->r
> field in connection level filters.  Since that field doesn't make sense in
> non-HTTP requests, modifying the filter_rec so that it is there is a
> mistake.
> 
> Think about it this way.  If you add the request_rec to the
> core_output_filter, and that filter is modified to use it, then all
> protocol modules must ensure that the request_rec is added to the filter,
> even if it doesn't make sense.
> 
> If you make it optional to put the request_rec in the filter_rec, then you
> will be making it much harder to write filters.

OK, I think I understand now what you're trying to say. Things can go wrong if
important fields can be NULL when you want them to be non-NULL and vice versa. I
reckon that good documentation on the matter would help developers understand
that they cannot depend on f->r being set for some protocols, which would
eliminate confusion. But, this is the first ever filter that I'm writing and
it's awfully simple, so I could be dead wrong.

Let me go back to the core of the problem - how do I store the number of bytes
that were sent/received on the wire into the request_rec structure in order to
log that? After discussion with Justin, it seemed that connection based filters
were the place to do it because request based filters either don't see the input
headers or they don't take into account things like SSL. I mean, logging
something like that should be a fairly trivial exercise and yet it seems to be
creating major dramas...

Bojan




Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread rbb

On Wed, 18 Sep 2002, Bojan Smojver wrote:

> I understand. But isn't the ap_read_request HTTP specific given it's creating
> the request and all? In other words, if protocol is HTTP, we let connection
> filter know about the request. If not, we don't. That to me sounds exactly the
> same as your suggestion.
> 
> I must be missing something...

The problem is that the filters aren't HTTP specific.  If you make this
change, then the filters will be written to take advantage of the f->r
field in connection level filters.  Since that field doesn't make sense in
non-HTTP requests, modifying the filter_rec so that it is there is a
mistake.

Think about it this way.  If you add the request_rec to the
core_output_filter, and that filter is modified to use it, then all
protocol modules must ensure that the request_rec is added to the filter,
even if it doesn't make sense.

If you make it optional to put the request_rec in the filter_rec, then you
will be making it much harder to write filters.

Ryan


> 
> Bojan
> 
> Quoting [EMAIL PROTECTED]:
> 
> > You don't want to do this.  A connection based filter is connection
> > oriented.  By definition, it has no concept of a request.  While this
> > might make sense for HTTP, other protocol modules will have a much harder
> > time with this change.
> > 
> > Ryan
> 

-- 

___
Ryan Bloom  [EMAIL PROTECTED]
550 Jean St
Oakland CA 94610
---




Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread Bojan Smojver

Quoting Ryan Morgan <[EMAIL PROTECTED]>:

> I would tend to agree with Ryan here, it should not be assumed that the
> request_rec will be available in the core_output_filter.

I wasn't planning on making any changes to core_output_filter that would make it
depend on f->r being set. The whole thing would be happening in a separate
module (mod_logio.c), which would either act (if f->r is set) or leave it alone
(if f->r is NULL). So, HTTP requests, for which f->r would be set even in the
connection based filters, would be dealt with; anything else would be ignored by
the module.

Bojan




Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread Bojan Smojver

I understand. But isn't the ap_read_request HTTP specific given it's creating
the request and all? In other words, if protocol is HTTP, we let connection
filter know about the request. If not, we don't. That to me sounds exactly the
same as your suggestion.

I must be missing something...

Bojan

Quoting [EMAIL PROTECTED]:

> You don't want to do this.  A connection based filter is connection
> oriented.  By definition, it has no concept of a request.  While this
> might make sense for HTTP, other protocol modules will have a much harder
> time with this change.
> 
> Ryan




Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread Ryan Morgan


I would tend to agree with Ryan here, it should not be assumed that the
request_rec will be available in the core_output_filter.  One possible
solution would be to have c->bytes_in and c->bytes_out, then have identical
fields in the request_rec that would be updated upon the completion of the
request (counting only the bytes sent during *that* request).  I gave this
a try a while back, but ran into problems because the request processing
can bail out and run ap_log_transaction whenever it wants (on 404s, etc.).
The request processing in Apache is not symmetrical enough for that approach.

Then there's the issue of counting the bytes in the set-aside buckets
from buffered output (like Brian mentioned), but that's another problem
altogether.

-Ryan

On Tue, Sep 17, 2002 at 11:22:03PM -0400, [EMAIL PROTECTED] wrote:
> 
> You don't want to do this.  A connection based filter is connection
> oriented.  By definition, it has no concept of a request.  While this
> might make sense for HTTP, other protocol modules will have a much harder
> time with this change.
> 
> Ryan
> 
> On 18 Sep 2002, Bojan Smojver wrote:
> 
> > Justin,
> > 
> > After mucking around a bit with what you suggested, this seemed like the
> > straightforward thing to do. Not sure how things would work on internal
> > redirects etc. Can you please have a look. I'm sure it's naive, but so
> > am I (for now :-)
> > 
> > Bojan
> > 
> 
> 
> 



Re: [PROPOSAL]: Add two fields to request_rec

2002-09-17 Thread Bojan Smojver

Quoting Brian Pane <[EMAIL PROTECTED]>:

> The one problem I see is that, because the response data might
> not be written until after the request has been logged--in the
> case of a keep-alive request--the bytes_out field will need to
> be updated *before* setting aside any output in core_output_filter().
> I.e., output bytes should be counted when they're passed to the
> core out filter, not when they're actually written.  That's
> somewhat complicated, because the core output filter code is
> so convoluted.

The way mod_logio works at the moment (on my box) is at the connection level
(i.e. AP_FTYPE_CONNECTION), which should run before core_output_filter (i.e.
AP_FTYPE_NETWORK). The counting is done by a single call to apr_brigade_length,
which does not seem all that heavy, especially given that I call it with
read_all set to 0. So, I think we should be OK there, unless I didn't understand
the order of output filters properly.

> But if you can make that work, then we probably can also stop
> computing r->bytes_sent in the content-length filter on streamed
> requests, which will save one more scan through the brigade on
> each request.

Huh, don't know enough (yet :-) to contribute anything useful there.

> Fortunately, this only requires an increment for the minor
> number.  (Put the new fields at the end of the structure to
> maintain backward compatibility.)

Yep, by all means.

> So, +1 on the proposal.

Thanks.

Bojan




Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread rbb


You don't want to do this.  A connection based filter is connection
oriented.  By definition, it has no concept of a request.  While this
might make sense for HTTP, other protocol modules will have a much harder
time with this change.

Ryan

On 18 Sep 2002, Bojan Smojver wrote:

> Justin,
> 
> After mucking around a bit with what you suggested, this seemed like the
> straightforward thing to do. Not sure how things would work on internal
> redirects etc. Can you please have a look. I'm sure it's naive, but so
> am I (for now :-)
> 
> Bojan
> 





Re: [PROPOSAL]: Add two fields to request_rec

2002-09-17 Thread Brian Pane

Bojan Smojver wrote:

>I'm proposing to add two extra fields to request_rec structure:
>
>apr_off_t bytes_in
>apr_off_t bytes_out
>
>These babies would be used to record the number of bytes received and sent (on
>the wire) for each request. I'm writing a little filter module (with a lot of
>help from Justin :-), to do the actual counting. The logging of these numbers
>would require a patch to mod_log_config.c, which is already done. There would
>also be some other changes required throughout Apache 2.0, mainly to do with
>making sure that connection filters have a valid value of f->r.
>

The one problem I see is that, because the response data might
not be written until after the request has been logged--in the
case of a keep-alive request--the bytes_out field will need to
be updated *before* setting aside any output in core_output_filter().
I.e., output bytes should be counted when they're passed to the
core out filter, not when they're actually written.  That's
somewhat complicated, because the core output filter code is
so convoluted.

But if you can make that work, then we probably can also stop
computing r->bytes_sent in the content-length filter on streamed
requests, which will save one more scan through the brigade on
each request.

>I wanted to find out if there would be enough support for such a change as it
>makes a significant difference to the code of the module.
>
>The unfortunate side effect would be yet another MMN bump.
>

Fortunately, this only requires an increment for the minor
number.  (Put the new fields at the end of the structure to
maintain backward compatibility.)

So, +1 on the proposal.

Brian





[PROPOSAL]: Add two fields to request_rec

2002-09-17 Thread Bojan Smojver

I'm proposing to add two extra fields to request_rec structure:

apr_off_t bytes_in
apr_off_t bytes_out

These babies would be used to record the number of bytes received and sent (on
the wire) for each request. I'm writing a little filter module (with a lot of
help from Justin :-), to do the actual counting. The logging of these numbers
would require a patch to mod_log_config.c, which is already done. There would
also be some other changes required throughout Apache 2.0, mainly to do with
making sure that connection filters have a valid value of f->r.

I wanted to find out if there would be enough support for such a change as it
makes a significant difference to the code of the module.

The unfortunate side effect would be yet another MMN bump.

Bojan




Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread Bojan Smojver

Quoting Justin Erenkrantz <[EMAIL PROTECTED]>:

> Well, I'd do (plus the declaration of f in your later post):
> 
> for (f = conn->input_filters; f; f = f->next) {
>f->r = r;
> }
> 
> for (f = conn->output_filters; f; f = f->next) {
>f->r = r;
> }

What was that book again... K&R, The C Programming Language. Maybe I should find
out where it went :-)

I have changed the code of mod_logio.c significantly in the past few hours,
throwing out the request based filters altogether and just keeping the
connection based filters, now that f->r is not NULL any more. The whole thing
turned out to be much simpler and very accurate on simple requests. I'm yet to
do heavy testing, but I reckon even things like SSL should be covered.

> Now, the question is whether anyone has any major qualms with this.
> Basically, it now allows connection filters some access to the
> current request_rec (as defined by the protocol module).  
> 
> Internal redirects would be handled by the update_r_in_filters func
> in http_request.c (which is equivalent to this for statement - hmm,
> I foresee ap_update_filters in our future), so that should be fine.

I'm not sure that's the case. It only updates content and protocol filters (i.e.
the ones tied to the request_rec), but it doesn't touch connection filters. I
reckon we should do the above for internal redirects as well. That is, if I
really understand what internal redirects are in the first place :-)

> The only potential problem I foresee is whether the input bytes could
> be logged on one request, and the output on another.  Therefore, we
> might want to walk r->prev and add the totals to get our real bytes
> read and sent.  Oooh, internal_internal_redirect copies the
> r->read_length to the new request, so I think we're fine.

I'll try to test that by using some functionality that depends on
internal_internal_redirect. From what I see in internal_internal_redirect, the
new->notes gets created afresh, so whatever I put in r->notes is going to get
lost. Big dramas! Looks like we'll *have to* introduce r->bytes_in and
r->bytes_out if we want this to work at all...

This would, of course, require yet another MMN bump... And it would require
revisiting all of the code that deals with copying requests (e.g.
internal_internal_redirect) and then making sure that those fields are copied as
well. It also might make a lot of people upset... :-( On the other hand, it's
going to make mod_logio.c almost trivial and probably faster.

> For the logging case, I think this makes a lot of sense as it lets
> us simplify how we log.

I can verify that it is almost impossible to log the correct number of output
bytes (on the wire but per request) unless the requests are tied to connection
filters. Input bytes are different, because request based filters run after
connection based filters and we can always do what I was doing before (i.e.
store in c->notes and then copy into r->whatever). However, request based output
filters are already out of the way when connection based filters take over and
if f->r is NULL, it is not possible to store the number of bytes into the
request. So, the above patch and other necessary additions would be essential
to achieve this.

Bojan




Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread Justin Erenkrantz

On Wed, Sep 18, 2002 at 07:36:00AM +1000, Bojan Smojver wrote:
> Justin,
> 
> After mucking around a bit with what you suggested, this seemed like the
> straightforward thing to do. Not sure how things would work on internal
> redirects etc. Can you please have a look. I'm sure it's naive, but so
> am I (for now :-)
> 
> Bojan

> --- httpd-2.0-vanilla/server/protocol.c   Fri Sep  6 16:01:19 2002
> +++ httpd-2.0/server/protocol.c   Wed Sep 18 07:32:37 2002
> @@ -895,6 +895,15 @@
>  r->connection  = conn;
>  r->server  = conn->base_server;
>  
> +/* Update the request in connection filters */
> +f = conn->input_filters;
> +for (; f; f = f->next)
> +f->r = r;
> +
> +f = conn->output_filters;
> +for (; f; f = f->next)
> +f->r = r;
> +

Well, I'd do (plus the declaration of f in your later post):

for (f = conn->input_filters; f; f = f->next) {
   f->r = r;
}

for (f = conn->output_filters; f; f = f->next) {
   f->r = r;
}

Now, the question is whether anyone has any major qualms with this.
Basically, it now allows connection filters some access to the
current request_rec (as defined by the protocol module).  

Internal redirects would be handled by the update_r_in_filters func
in http_request.c (which is equivalent to this for statement - hmm,
I foresee ap_update_filters in our future), so that should be fine.

The only potential problem I foresee is whether the input bytes could
be logged on one request, and the output on another.  Therefore, we
might want to walk r->prev and add the totals to get our real bytes
read and sent.  Oooh, internal_internal_redirect copies the
r->read_length to the new request, so I think we're fine.

For the logging case, I think this makes a lot of sense as it lets
us simplify how we log.  -- justin



Re: [PATCH]: Update request in connection based filters

2002-09-17 Thread Bojan Smojver

OOPS, forgot the variable itself...

Bojan

On Wed, 2002-09-18 at 07:36, Bojan Smojver wrote:
> Justin,
> 
> After mucking around a bit with what you suggested, this seemed like the
> straightforward thing to do. Not sure how things would work on internal
> redirects etc. Can you please have a look. I'm sure it's naive, but so
> am I (for now :-)
> 
> Bojan


--- httpd-2.0-vanilla/server/protocol.c Fri Sep  6 16:01:19 2002
+++ httpd-2.0/server/protocol.c Wed Sep 18 07:38:37 2002
@@ -888,6 +888,7 @@
 const char *expect;
 int access_status;
 apr_bucket_brigade *tmp_bb;
+ap_filter_t *f;
 
 apr_pool_create(&p, conn->pool);
 r = apr_pcalloc(p, sizeof(request_rec));
@@ -895,6 +896,15 @@
 r->connection  = conn;
 r->server  = conn->base_server;
 
+/* Update the request in connection filters */
+f = conn->input_filters;
+for (; f; f = f->next)
+f->r = r;
+
+f = conn->output_filters;
+for (; f; f = f->next)
+f->r = r;
+
 r->user= NULL;
 r->ap_auth_type= NULL;
 



[PATCH]: Update request in connection based filters

2002-09-17 Thread Bojan Smojver

Justin,

After mucking around a bit with what you suggested, this seemed like the
straightforward thing to do. Not sure how things would work on internal
redirects etc. Can you please have a look. I'm sure it's naive, but so
am I (for now :-)

Bojan


--- httpd-2.0-vanilla/server/protocol.c Fri Sep  6 16:01:19 2002
+++ httpd-2.0/server/protocol.c Wed Sep 18 07:32:37 2002
@@ -895,6 +895,15 @@
 r->connection  = conn;
 r->server  = conn->base_server;
 
+/* Update the request in connection filters */
+f = conn->input_filters;
+for (; f; f = f->next)
+f->r = r;
+
+f = conn->output_filters;
+for (; f; f = f->next)
+f->r = r;
+
 r->user= NULL;
 r->ap_auth_type= NULL;
 



Re: custom linux kernel builds

2002-09-17 Thread Brian Pane

[EMAIL PROTECTED] wrote:

>__pthread_alt_unlock() loops through a list of threads that are blocked on a
>mutex to find the thread with the highest priority. I don't know which mutex
>this is; I'm guessing it's the one associated with worker's condition variable. 
>The ironic thing is that for httpd all the threads have the same priority AFAIK,
>so these cycles don't do us much good.  I'm not aware of a patch to improve
>this, so I think our choices for scalability in the meantime are keeping
>ThreadsPerChild
>very low or prefork.
>

The leader/follower MPM might solve this problem.  It doesn't share one
big condition variable among all the idle workers; instead, each worker
has its own condition var.  If you have time to try the 2.0.41 version
of leader/follower in your test environment, I'd be really interested
in hearing whether it fixes the __pthread_alt_unlock() bottleneck.
(Note: It needs to be built with "./configure 
--enable-nonportable-atomics=yes"
for best results on x86.)

Brian





RE: Tagged and rolled 2.0.41

2002-09-17 Thread Sander Striker

> From: William A. Rowe, Jr. [mailto:[EMAIL PROTECTED]]
> Sent: 17 September 2002 20:57

> >*sigh*  Of course you are right.  So, what do we do, stick with 2.0.41 or retag
> >APACHE_2_0_42 to be the same as APACHE_2_0_41 and reroll?
> 
> Don't be silly, this is an alpha candidate.
> 
> Before it moves to www.apache.org/dist/httpd/ fix it.  Until then, it
> matters not.

Thanks for the feedback.  This RM adds another point to his list of things
to remember  (while fixing the tarballs).

Sander




Re: Moving to 2.0.42? WAS: RE: Tagged and rolled 2.0.41

2002-09-17 Thread Greg Stein

On Tue, Sep 17, 2002 at 09:09:38PM +0200, Sander Striker wrote:
>...
> > > Just on the basic premise that the tarball has been released.  At this
> > > point, it is available for users.  If we are going to create new tarballs,
> > > then they must have a new name.

Nope. The contents are the same. We're just fixing a process issue, not a
code issue.

> > *sigh*  Of course you are right.  So, what do we do, stick with 2.0.41 or retag
> > APACHE_2_0_42 to be the same as APACHE_2_0_41 and reroll?
> 
> Ok, maybe this is all a bit too much for a _timestamp_.  There were no
> content changes.  So, if someone would be using 2.0.41 (before or after the
> timestamp tweak), the code would be the same.

Exactly. The rules for creating new tarballs exist so that we can know
*precisely* what code somebody is using when they say "2.0.41". If there
were three tarballs with three different bodies of code, then we'd be hosed.
And that is what the rules are about.

In this case, we aren't changing anything but a timestamp to help people.

> Since we haven't released yet
> I would like to simply replace the existing tarballs and sigs.

You're the RM. If you believe this will result in a good release, then go
for it.

From my standpoint: +1 of course.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



RE: Tagged and rolled 2.0.41

2002-09-17 Thread William A. Rowe, Jr.

At 01:59 PM 9/17/2002, Sander Striker wrote:
> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> > Sent: 17 September 2002 20:44
>
> >>> I would also recommend a new tarball with the timestamp tweaked. 
> Something
> >>> like so:
> >>>
> >>> $ tar xzf httpd-tar.gz
> >>> $ touch .../ssl_expr_parse.c
> >>> $ tar czf httpd-tar.gz httpd-...
> >>>
> >>> That's gonna affect the tarball's MD5 signature tho.
> >>
> >> And the PGP signatures.  Do I hear objections against that?
> >
> > Just on the basic premise that the tarball has been released.  At this
> > point, it is available for users.  If we are going to create new tarballs,
> > then they must have a new name.
>
>*sigh*  Of course you are right.  So, what do we do, stick with 2.0.41 or retag
>APACHE_2_0_42 to be the same as APACHE_2_0_41 and reroll?

Don't be silly, this is an alpha candidate.

Before it moves to www.apache.org/dist/httpd/ fix it.  Until then, it
matters not.

Bill





Moving to 2.0.42? WAS: RE: Tagged and rolled 2.0.41

2002-09-17 Thread Sander Striker

> From: Sander Striker [mailto:[EMAIL PROTECTED]]
> Sent: 17 September 2002 20:59

> > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> > Sent: 17 September 2002 20:44
> 
> >>> I would also recommend a new tarball with the timestamp tweaked. Something
> >>> like so:
> >>> 
> >>> $ tar xzf httpd-tar.gz
> >>> $ touch .../ssl_expr_parse.c
> >>> $ tar czf httpd-tar.gz httpd-...
> >>> 
> >>> That's gonna affect the tarball's MD5 signature tho.
> >> 
> >> And the PGP signatures.  Do I hear objections against that?
> > 
> > Just on the basic premise that the tarball has been released.  At this
> > point, it is available for users.  If we are going to create new tarballs,
> > then they must have a new name.
> 
> *sigh*  Of course you are right.  So, what do we do, stick with 2.0.41 or retag
> APACHE_2_0_42 to be the same as APACHE_2_0_41 and reroll?

Ok, maybe this is all a bit too much for a _timestamp_.  There were no
content changes.  So, if someone would be using 2.0.41 (before or after the
timestamp tweak), the code would be the same.  Since we haven't released yet
I would like to simply replace the existing tarballs and sigs.

Sander




RE: Tagged and rolled 2.0.41

2002-09-17 Thread Sander Striker

> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]]
> Sent: 17 September 2002 20:44

>>> I would also recommend a new tarball with the timestamp tweaked. Something
>>> like so:
>>> 
>>> $ tar xzf httpd-tar.gz
>>> $ touch .../ssl_expr_parse.c
>>> $ tar czf httpd-tar.gz httpd-...
>>> 
>>> That's gonna affect the tarball's MD5 signature tho.
>> 
>> And the PGP signatures.  Do I hear objections against that?
> 
> Just on the basic premise that the tarball has been released.  At this
> point, it is available for users.  If we are going to create new tarballs,
> then they must have a new name.

*sigh*  Of course you are right.  So, what do we do, stick with 2.0.41 or retag
APACHE_2_0_42 to be the same as APACHE_2_0_41 and reroll?

Sander



RE: Tagged and rolled 2.0.41

2002-09-17 Thread rbb


> > I would also recommend a new tarball with the timestamp tweaked. Something
> > like so:
> > 
> > $ tar xzf httpd-tar.gz
> > $ touch .../ssl_expr_parse.c
> > $ tar czf httpd-tar.gz httpd-...
> > 
> > That's gonna affect the tarball's MD5 signature tho.
> 
> And the PGP signatures.  Do I hear objections against that?

Just on the basic premise that the tarball has been released.  At this
point, it is available for users.  If we are going to create new tarballs,
then they must have a new name.

Ryan





Re: auth stuff still broken

2002-09-17 Thread rbb

On Tue, 17 Sep 2002, Greg Stein wrote:

> On Tue, Sep 17, 2002 at 10:26:02AM -0700, Aaron Bannert wrote:
> > On Tue, Sep 17, 2002 at 01:00:44PM -0400, Ryan Bloom wrote:
> > > > Does that make any sense?  I'm certain you will have users misconfigure
> > > > the 'backstop' modules (_default flavors) resulting in insecure servers.
> > > > If the 'backstop' _default auth handlers are always loaded as part of the
> > > > core mod_auth, users will have far fewer problems.
> > > 
> > > I almost like this, but I wouldn't put it mod_auth.  I would put them in
> > > the core.  The core server has always been the location for our default
> > > functions.
> > 
> > +1 for the core, or at least a module that's always statically compiled
> > (which is easy to do with the .m4 macros we have).
> 
> Well... as long as "core" means modules/http/.
> 
> But since our running of auth hooks comes from server/, then this stuff
> could prolly go there as well. IMO, it sucks that our "core" server knows
> about HTTP authentication and authorization.
> 
> In general: sure, this stuff makes some sense in the core rather than
> default modules.

It should be in server, please.  Remember that many protocol modules use
those same hooks to do their authentication.  Perhaps this re-org should
also try to abstract that stuff out.  But, regardless of whether the
HTTP part of the authentication is abstracted, please leave the hooks in
the /server directory.  At some point, it would be really nice to be able
to build Apache without the /http module.

Ryan





RE: Tagged and rolled 2.0.41

2002-09-17 Thread Sander Striker

> From: Greg Stein [mailto:[EMAIL PROTECTED]]
> Sent: 17 September 2002 20:31

> On Tue, Sep 17, 2002 at 12:56:21PM +0200, Sander Striker wrote:
> > > From: Mads Toftum [mailto:[EMAIL PROTECTED]]
> > > Sent: 17 September 2002 12:38
> > 
> > > On Tue, Sep 17, 2002 at 03:41:15AM +0200, Sander Striker wrote:
> > > > Hi,
> > > > 
> > > > I've tagged and rolled 2.0.41.  Please test the tarballs found
> > > > at http://httpd.apache.org/dev/dist/ and cast thy votes.
> > > > 
> > > One very small issue when building (tested on solaris 8 sparc) with
> > > ssl_expr_parse.y in modules/ssl. Given that the timestamp on that file
> > > is newer than on the output files, then make wants to run them through
> > > yacc again. Looking at the timestamps it seems that ssl_expr_parse.y
> > > has been updated by buildconf. Updating the timestamp on the output
> > > files makes apache build cleanly without yacc.
> > 
> > Grmpf.  Should we modify httpd_roll_release to touch the output files?
> > 
> > Samder
> 
> Hey "Samder"... Yes, +1 on tweaking httpd_roll_release to touch the files.

Bah, I can't even spell my own name ;)
 
> I would also recommend a new tarball with the timestamp tweaked. Something
> like so:
> 
> $ tar xzf httpd-tar.gz
> $ touch .../ssl_expr_parse.c
> $ tar czf httpd-tar.gz httpd-...
> 
> That's gonna affect the tarball's MD5 signature tho.

And the PGP signatures.  Do I hear objections against that?

Sander




Re: Tagged and rolled 2.0.41

2002-09-17 Thread Greg Stein

On Tue, Sep 17, 2002 at 12:56:21PM +0200, Sander Striker wrote:
> > From: Mads Toftum [mailto:[EMAIL PROTECTED]]
> > Sent: 17 September 2002 12:38
> 
> > On Tue, Sep 17, 2002 at 03:41:15AM +0200, Sander Striker wrote:
> > > Hi,
> > > 
> > > I've tagged and rolled 2.0.41.  Please test the tarballs found
> > > at http://httpd.apache.org/dev/dist/ and cast thy votes.
> > > 
> > One very small issue when building (tested on solaris 8 sparc) with
> > ssl_expr_parse.y in modules/ssl. Given that the timestamp on that file
> > is newer than on the output files, then make wants to run them through
> > yacc again. Looking at the timestamps it seems that ssl_expr_parse.y
> > has been updated by buildconf. Updating the timestamp on the output
> > files makes apache build cleanly without yacc.
> 
> Grmpf.  Should we modify httpd_roll_release to touch the output files?
> 
> Samder

Hey "Samder"... Yes, +1 on tweaking httpd_roll_release to touch the files.

I would also recommend a new tarball with the timestamp tweaked. Something
like so:

$ tar xzf httpd-tar.gz
$ touch .../ssl_expr_parse.c
$ tar czf httpd-tar.gz httpd-...

That's gonna affect the tarball's MD5 signature tho.

Cheers,
-g




Re: auth stuff still broken

2002-09-17 Thread John K. Sterling


>-- Original Message --
>Date: Tue, 17 Sep 2002 11:19:53 -0700
>From: Greg Stein <[EMAIL PROTECTED]>
>Subject: Re: auth stuff still broken
>
>
>But since our running of auth hooks comes from server/, then this stuff
>could prolly go there as well. IMO, it sucks that our "core" server knows
>about HTTP authentication and authorization.

I think it can be done agnostic to protocol, as a generic provider interface
- like Justin suggested.  That would solve all of these problems.

sterling




Re: auth stuff still broken

2002-09-17 Thread John K. Sterling


>-- Original Message --
>Date: Tue, 17 Sep 2002 10:26:02 -0700
>From: Aaron Bannert <[EMAIL PROTECTED]>
>Subject: Re: auth stuff still broken
>
>
>+1 for the core, or at least a module that's always statically compiled
>(which is easy to do with the .m4 macros we have).

Yup - I suppose if we do that they don't need to be optional functions either...
 But it would be nice to do what justin suggested:  namespace protect the
registration (e.g. register_provider("authn", "file", file_authn_module))
or whatever.  This would allow dav (and all other modules that have a similar
architecture) to reuse this code in their own namespace.

sterling




Re: auth stuff still broken

2002-09-17 Thread Greg Stein

On Tue, Sep 17, 2002 at 10:26:02AM -0700, Aaron Bannert wrote:
> On Tue, Sep 17, 2002 at 01:00:44PM -0400, Ryan Bloom wrote:
> > > Does that make any sense?  I'm certain you will have users misconfigure
> > > the 'backstop' modules (_default flavors) resulting in insecure servers.
> > > If the 'backstop' _default auth handlers are always loaded as part of the
> > > core mod_auth, users will have far fewer problems.
> > 
> > I almost like this, but I wouldn't put it mod_auth.  I would put them in
> > the core.  The core server has always been the location for our default
> > functions.
> 
> +1 for the core, or at least a module that's always statically compiled
> (which is easy to do with the .m4 macros we have).

Well... as long as "core" means modules/http/.

But since our running of auth hooks comes from server/, then this stuff
could prolly go there as well. IMO, it sucks that our "core" server knows
about HTTP authentication and authorization.

In general: sure, this stuff makes some sense in the core rather than
default modules.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



Re: auth stuff still broken

2002-09-17 Thread Aaron Bannert

On Tue, Sep 17, 2002 at 01:00:44PM -0400, Ryan Bloom wrote:
> > Does that make any sense?  I'm certain you will have users misconfigure
> > the 'backstop' modules (_default flavors) resulting in insecure servers.
> > If the 'backstop' _default auth handlers are always loaded as part of the
> > core mod_auth, users will have far fewer problems.
> 
> I almost like this, but I wouldn't put it mod_auth.  I would put them in
> the core.  The core server has always been the location for our default
> functions.

+1 for the core, or at least a module that's always statically compiled
(which is easy to do with the .m4 macros we have).

-aaron



Re: auth stuff still broken

2002-09-17 Thread William A. Rowe, Jr.

At 12:04 PM 9/17/2002, Ian Holsman wrote:
>On Tue, 17 Sep 2002 10:00:44 -0700, rbb wrote:
> > On Tue, 17 Sep 2002, William A. Rowe, Jr. wrote:
> >> Mod_auth would further include all of the hooks, and be the common
> >> module that all other mod_auth_foo, authn and authz modules require.
> >>
> >> Does that make any sense?  I'm certain you will have users misconfigure
> >> the 'backstop' modules (_default flavors) resulting in insecure servers.
> >> If the 'backstop' _default auth handlers are always loaded as part of
> >> the core mod_auth, users will have far fewer problems.
> >
> > I almost like this, but I wouldn't put it mod_auth.  I would put them in
> > the core.  The core server has always been the location for our default
> > functions.
>
>why don't you do what we do in mod_core & mod_proxy
>just create a 'mod_auth' with the common functions and have mod_auth_XYZ
>just hook into that

That was what I was getting at.  A completely plugable and removable
auth architecture, just like dav, proxy and cache.

Bill





Re: auth stuff still broken

2002-09-17 Thread Justin Erenkrantz

On Tue, Sep 17, 2002 at 11:50:56AM -0500, William A. Rowe, Jr. wrote:
> I was thinking about this.  What about -eliminating- the mod_authn_default
> and mod_authz_default, merging them into mod_auth, and moving the
> directives from mod_auth_basic and mod_auth_digest into the common
> mod_auth.

Remember there is no mod_auth, and merging the mod_authn_default
and mod_authz_default into a new mod_auth would be contrary to the
entire split - we don't want any single module doing both functions
any more.  (Third-parties could, but we shouldn't.)

> Mod_auth would further include all of the hooks, and be the common
> module that all other mod_auth_foo, authn and authz modules require.
> 
> Does that make any sense?  I'm certain you will have users misconfigure
> the 'backstop' modules (_default flavors) resulting in insecure servers.
> If the 'backstop' _default auth handlers are always loaded as part of the
> core mod_auth, users will have far fewer problems.

Nah.  I don't think that's necessary.  In fact, I'm not really
sure what mod_authn_default should be doing - it's something that
Dirk's patch had - I don't really see why we need a 'backstopper'
module at all.  But, I left that code in there.

To me, it sounds like that could be a 'deny' provider - perhaps
it should be mod_authn_deny.

The use case could be:

AuthProvider file dbm deny

But, if you're going to do that, why not just set dbm to be
authoritative?  So, I'm not sold on keeping this there.  Well, we
could use that and toss the Authoritative directives.  Hmm...

FWIW, Sterling and I have outlined a solution that fixes the DSO
ordering problem by moving the provider API into the core.  It's
now waiting on someone to implement it.  =)  -- justin



Re: auth stuff still broken

2002-09-17 Thread Ian Holsman

On Tue, 17 Sep 2002 10:00:44 -0700, rbb wrote:

> On Tue, 17 Sep 2002, William A. Rowe, Jr. wrote:
> 
>> I was thinking about this.  What about -eliminating- the
>> mod_authn_default and mod_authz_default, merging them into mod_auth, and
>> moving the directives from mod_auth_basic and mod_auth_digest into the
>> common mod_auth.
>> 
>> Mod_auth would further include all of the hooks, and be the common
>> module that all other mod_auth_foo, authn and authz modules require.
>> 
>> Does that make any sense?  I'm certain you will have users misconfigure
>> the 'backstop' modules (_default flavors) resulting in insecure servers.
>> If the 'backstop' _default auth handlers are always loaded as part of
>> the core mod_auth, users will have far fewer problems.
> 
> I almost like this, but I wouldn't put it mod_auth.  I would put them in
> the core.  The core server has always been the location for our default
> functions.

why don't you do what we do in mod_core & mod_proxy
just create a 'mod_auth' with the common functions and have mod_auth_XYZ
just hook into that
> 
> Ryan
> 
> 
> ___
> Ryan Bloom[EMAIL PROTECTED] 550 Jean St
> Oakland CA 94610
> ---



Re: auth stuff still broken

2002-09-17 Thread rbb

On Tue, 17 Sep 2002, William A. Rowe, Jr. wrote:

> I was thinking about this.  What about -eliminating- the mod_authn_default
> and mod_authz_default, merging them into mod_auth, and moving the
> directives from mod_auth_basic and mod_auth_digest into the common
> mod_auth.
> 
> Mod_auth would further include all of the hooks, and be the common
> module that all other mod_auth_foo, authn and authz modules require.
> 
> Does that make any sense?  I'm certain you will have users misconfigure
> the 'backstop' modules (_default flavors) resulting in insecure servers.
> If the 'backstop' _default auth handlers are always loaded as part of the
> core mod_auth, users will have far fewer problems.

I almost like this, but I wouldn't put it mod_auth.  I would put them in
the core.  The core server has always been the location for our default
functions.

Ryan


___
Ryan Bloom  [EMAIL PROTECTED]
550 Jean St
Oakland CA 94610
---




Re: auth stuff still broken

2002-09-17 Thread William A. Rowe, Jr.

I was thinking about this.  What about -eliminating- the mod_authn_default
and mod_authz_default, merging them into mod_auth, and moving the
directives from mod_auth_basic and mod_auth_digest into the common
mod_auth.

Mod_auth would further include all of the hooks, and be the common
module that all other mod_auth_foo, authn and authz modules require.

Does that make any sense?  I'm certain you will have users misconfigure
the 'backstop' modules (_default flavors) resulting in insecure servers.
If the 'backstop' _default auth handlers are always loaded as part of the
core mod_auth, users will have far fewer problems.

Bill

At 11:21 AM 9/17/2002, Doug MacEachern wrote:
>a fresh build/install of .42-dev:
>Cannot load
>/.../modules/mod_authn_file.so into server:
>/.../modules/mod_authn_file.so:
>undefined symbol: authn_register_provider
>
>stock httpd.conf is installed (by 'make install') with modules in this
>order:
>LoadModule authn_file_module modules/mod_authn_file.so
>LoadModule authn_dbm_module modules/mod_authn_dbm.so
>LoadModule authn_anon_module modules/mod_authn_anon.so
>LoadModule authn_default_module modules/mod_authn_default.so
>LoadModule authz_host_module modules/mod_authz_host.so
>LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
>LoadModule authz_user_module modules/mod_authz_user.so
>LoadModule authz_dbm_module modules/mod_authz_dbm.so
>LoadModule authz_default_module modules/mod_authz_default.so
>LoadModule auth_basic_module modules/mod_auth_basic.so
>LoadModule auth_digest_module modules/mod_auth_digest.so





auth stuff still broken

2002-09-17 Thread Doug MacEachern

a fresh build/install of .42-dev:
Cannot load 
/.../modules/mod_authn_file.so into server: 
/.../modules/mod_authn_file.so: 
undefined symbol: authn_register_provider

stock httpd.conf is installed (by 'make install') with modules in this 
order:
LoadModule authn_file_module modules/mod_authn_file.so
LoadModule authn_dbm_module modules/mod_authn_dbm.so
LoadModule authn_anon_module modules/mod_authn_anon.so
LoadModule authn_default_module modules/mod_authn_default.so
LoadModule authz_host_module modules/mod_authz_host.so
LoadModule authz_groupfile_module modules/mod_authz_groupfile.so
LoadModule authz_user_module modules/mod_authz_user.so
LoadModule authz_dbm_module modules/mod_authz_dbm.so
LoadModule authz_default_module modules/mod_authz_default.so
LoadModule auth_basic_module modules/mod_auth_basic.so
LoadModule auth_digest_module modules/mod_auth_digest.so





Re: custom linux kernel builds

2002-09-17 Thread gregames

Ian Holsman wrote:
> 
> Hi Greg,
> 
> we are about to start into the wild wild world of linux, and I was
> wondering if you have any hints on what patches you would go with for a
> custom kernel to get maximum performance.. stuff like ingo's O(1)
> scheduler and the like..

I'm glad you asked, since I've been looking at scalability issues with Linux
lately.  Sorry for the long post - hit "delete" if you aren't interested.

We did some Linux benchmarking in a configuration similar to a reverse proxy
that takes disk file I/O and directory_walk out of the picture.  We started with
the 2.0.40 worker MPM on Red Hat 7.2 with a 2.4.9 kernel on a 2 way SMP.  We
tried numerous combinations of ThreadsPerChild and process limits such that
there were always a constant 1000 worker threads active in the server, maxed out
the CPUs, and ran oprofile 0.3.  We got the best throughput at 200
threads/process.  Then I took the oprofile sample counts for every
binary/library that used over 1% of the CPU, broke those down by function, and
then scaled the results to the throughput.  The results should be proportional
to CPU cycles per request by function.  Here are the heavy hitters:

  threads per process       binary/
    20     200     500      library      function name
  ------  ------  ------    ----------   --------------------
  230247   38334   66517    kernel       schedule
    1742   31336   75763    libpthread   __pthread_alt_unlock
    7447    6563    7341    libc         memcpy
    6661    6019    7109    libc         _IO_vfscanf
    5614    5388    5893    libc         strlen
   88825    5060   14296    kernel       stext_lock
    1276    4043    6210    kernel       do_select
    3994    3933    4239    kernel       tcp_sendmsg
    2761    3917    4071    libc         chunk_alloc
    4285    3606    3829    libapr       apr_palloc

disclaimer: the tests weren't rigidly controlled

The 5 and 6 digit numbers above are the most bothersome.  Bill S and Jeff told
me about Ingo Molnar's O(1) scheduler patch.  Reviewing the code before and
after this patch, I believe it will make a huge improvement in schedule()
cycles.  The older scheduler spends a lot of time looping thru the entire run
queue comparing "goodness" values in order to decide which task (process or
thread) is best to dispatch. That's gone with Ingo's O(1) patch.  It waits until
a task loses its time slice to recompute an internal priority value, and picks
the highest priority ready task using just a few instructions.  It turns out
that Red Hat 7.3 and Red Hat Advanced Server 2.1 already have this patch
included, so this one should be easy to solve.  dunno about other distros.

__pthread_alt_unlock() loops through a list of threads that are blocked on a
mutex to find the thread with the highest priority. I don't know which mutex
this is; I'm guessing it's the one associated with worker's condition variable. 
The ironic thing is that for httpd all the threads have the same priority AFAIK,
so these cycles don't do us much good.  I'm not aware of a patch to improve
this, so I think our choices for scalability in the mean time are keeping
ThreadsPerChild
very low or prefork.

The stext_lock() cycles mean that we're getting contention on some kernel
spinlock.  I don't know which lock yet.  The scheduler uses spinlocks on SMPs,
and the stext_lock cycles above sort of track the scheduler cycles so I'm hoping
that might be it.  There's a tool called lockmeter on SourceForge that can
provide spinlock usage statistics in case those cycles don't go away with the
O(1) scheduler patch.

Since we also had problems getting core dumps reliably with a threaded MPM in
addition to the pthread scalability issue, we decided to switch over to
prefork.  That gives better SPECWeb99 results at the moment too.  Then we
started hitting seg faults in pthread initialization code in child processes
during fork() when trying to start 2000 processes.  

It turns out that dmesg had tons of "ldt allocation failure" messages. 
linuxthreads uses the ldt on i386 to address its structure representing the
current thread.  Since apr is linked with threads enabled by default on Linux,
each child is assigned a 64K ldt out of kernel memory with only one entry for
thread 0 out of 8K used.  64K may not seem like much these days, but one one
machine we had 900M of RAM (according to free) and were trying for 10,000
concurrent connections, which works out to 90K of RAM per process with prefork.  

Configuring apr with --disable-threads makes the ldts a non-issue, but that
raises a concern for reliability of binaries when there are 3rd party modules
such as Vignette which are threaded.  It's pretty easy to patch the kernel to
reduce the size of the ldts from the i386 architected max of 8K entries each. 
That reduces the maximum number of threads per process (which might not be a
bad thing for httpd at the moment), and of course there will be a lot of users
unwilling to rebuild the Linux kernel.  With either --disable-threads in apr or
the ldts limited to 256 entries in the kernel, it's no problem starting 10,000
child processes.  You can also giv

Re: Preserving history in dev/dist?

2002-09-17 Thread Aaron Bannert

On Tue, Sep 17, 2002 at 12:50:59AM -0700, Justin Erenkrantz wrote:
> I vote 'delete' as these were never formally released, hence
> no need to keep them for posterity.  -- justin

I don't care if we delete them, since I don't at the moment see a
compelling reason to keep them around unless we decide we want to keep
all releases around.

-aaron

p.s. alphas, betas, and GAs are all releases. As soon as it goes up
on the website it's a release, and someone may have used it in their
product or on their system.




Re: mod_custom_log exits too late?

2002-09-17 Thread rbb

On 17 Sep 2002, Brian Pane wrote:

> On Mon, 2002-09-16 at 23:32, Justin Erenkrantz wrote:
> > On Mon, Sep 16, 2002 at 09:46:47AM -0700, Brian Pane wrote:
> > > I disagree entirely.  There's no need to let the API keep changing
> > > continuously, especially not for the sake of "correctness."  All of
> > > our competitors provide API stability.  And as a result, people who
> > > develop modules for, say, IIS or IPlanet don't need to worry about
> > > their code breaking with every maintenance release.
> > 
> > I think you're missing a huge point here.
> > 
> > You *cannot* tell IIS or iPlanet that their APIs suck.
> 
> Of course you can.  And if you're a big enough customer, your
> server vendor will take your comments seriously enough to improve
> APIs in the next major release.  What a responsible vendor will
> not do, however, is break all their other customers' systems in
> the next maintenance release just because one customer didn't like
> the function signatures.
> 
> > You probably
> > can't even fix problems in their servers.  You're stuck.
> > 
> > You *can* tell us that our APIs suck and provide patches on ways to
> > improve - especially for 2.0.
> 
> Sure.  And we have an obligation to users and third party developers
> to take constructive input and work it in to future releases in a
> manner that won't break people's systems.

I don't think we are going to end up agreeing.  I also don't think it
matters.  The reality is that you can't get to binary compat without doing
one of two things.

1)  Wait until the API is ready.
2)  Litter the API with multiple functions that all do the same thing in
slightly different ways.

#2 is bogus, and all it does is make the API harder to understand.  If
you can somehow manage to achieve binary compat without doing either of
these, then cool.  Otherwise, I would much rather see us choose option #1,
because it is less detrimental in the long run.  

The example that has been given so far, is the ap_filter_*_register
functions, which had a function pointer added.  The suggestion was that
this could have been done by adding another function.  Yes, it could
have.  But, you wouldn't be able to get rid of those functions until
3.0.  As proof, we had functions from 1.2 that weren't removed until the
2.0 release.  Sorry, but API pollution doesn't excite me.  Between API
pollution and breaking binary compat, I choose the latter.  

Commercial companies can't do this, because users don't have the
source.  Our users do, and we can do this.  If a commercial company wants
to modify Apache to provide binary backwards compat, they can do that, and
as long as they remain within a single major version, the work shouldn't
even be that hard.

The other option, is for us to create an ap_compat.h header again.  This
would provide us with a level of source compat.  I don't know why we
haven't been keeping that file up to date.

Ryan

___
Ryan Bloom  [EMAIL PROTECTED]
550 Jean St
Oakland CA 94610
---




Re: httpd response to openssl worm

2002-09-17 Thread Joshua Slive

William A. Rowe, Jr. wrote:

> I agree it would be nice to repost an OpenSSL/mod_ssl advisory on our
> pages (mod_ssl is a sister project, after all.)
> 
> But understand that the ASF took ownership of mod_ssl for Apache 2.0,
> not 1.3, and we not married to any particular SSL library (although many
> of us are very proud of the OpenSSL project, and several major contributors
> overlap between the projects.)
> 
> So +1 to rebroadcasting mod_ssl's or OpenSSL's announce, but I'm not
> losing sleep over it.  This is clearly OpenSSL's little bugger 
> (inherited in
> part or in full by other implementations, depending on their code 
> affinity.)

Certainly I'm talking about doing this as a service to our users, not as 
an obligation.

I've updated the httpd.apache.org homepage with a few words on the 
subject.  I'll wait a couple hours before I make it live in case anyone 
who is more familiar with this stuff wants to fine-tune it.

I think it would also be a good idea to send an email to the announce@ 
lists, but I'm not pgp-enabled at the moment, so I can't do it.

Joshua.




Re: mod_custom_log exits too late?

2002-09-17 Thread Brian Pane

On Mon, 2002-09-16 at 23:32, Justin Erenkrantz wrote:
> On Mon, Sep 16, 2002 at 09:46:47AM -0700, Brian Pane wrote:
> > I disagree entirely.  There's no need to let the API keep changing
> > continuously, especially not for the sake of "correctness."  All of
> > our competitors provide API stability.  And as a result, people who
> > develop modules for, say, IIS or IPlanet don't need to worry about
> > their code breaking with every maintenance release.
> 
> I think you're missing a huge point here.
> 
> You *cannot* tell IIS or iPlanet that their APIs suck.

Of course you can.  And if you're a big enough customer, your
server vendor will take your comments seriously enough to improve
APIs in the next major release.  What a responsible vendor will
not do, however, is break all their other customers' systems in
the next maintenance release just because one customer didn't like
the function signatures.

> You probably
> can't even fix problems in their servers.  You're stuck.
> 
> You *can* tell us that our APIs suck and provide patches on ways to
> improve - especially for 2.0.

Sure.  And we have an obligation to users and third party developers
to take constructive input and work it in to future releases in a
manner that won't break people's systems.

Brian





Re: [PATCH]: Module Magic Number Glossary for Apache 2.0

2002-09-17 Thread Joshua Slive

Bojan Smojver wrote:
> I guess the HTML file would get generated automatically, but I'm
> attaching it anyway.

Thanks.  Both the FAQ and glossary entries are committed.

Joshua.





RE: DBM SSL Session Cache (Bug # 12705)

2002-09-17 Thread Amund Elstad

Thank you, I will experiment with mutex protection and see if it makes
things more stable. 

I am not very familiar with the sourcecode, but it does look very strange to
me. Even the (unused?) status function has mutex protection -:)

amund




-Original Message-
From: Jeff Trawick [mailto:[EMAIL PROTECTED]]
Sent: 17. september 2002 13:35
To: [EMAIL PROTECTED]
Subject: Re: DBM SSL Session Cache (Bug # 12705)


Amund Elstad <[EMAIL PROTECTED]> writes:

> Hi,
> 
> Can someone please explain why ssl_scache_dbm_retrieve() in
> modules/ssl/ssl_scache_dbm.c 
> is not protected by the SSLMutex? 
> 
> The function uses the same pool (module config) as the store/remove/expire
> operations, 
> so shouldn't it also be protected to avoid race conditions ? 
> 
> Our SSL-enabled server (httpd 2.0.40 on win32) locks up in endless
> loops regularly, 
> and I have a trace showing that it is stuck inside pool-cleanup after
> calling dbm_close.
> (see http://nagoya.apache.org/bugzilla/show_bug.cgi?id=12705)

I can't explain why it isn't protected by the mutex.  Have you tried
protecting the code with a mutex and rebuilding?  At worst it would
slow things down a bit.

untested patch:

Index: modules/ssl/ssl_scache_dbm.c
===
RCS file: /home/cvs/httpd-2.0/modules/ssl/ssl_scache_dbm.c,v
retrieving revision 1.16
diff -u -r1.16 ssl_scache_dbm.c
--- modules/ssl/ssl_scache_dbm.c17 May 2002 11:24:17 -  1.16
+++ modules/ssl/ssl_scache_dbm.c17 Sep 2002 11:32:47 -
@@ -228,21 +228,25 @@
      * XXX: Should we open the dbm against r->pool so the cleanup will
      * do the apr_dbm_close? This would make the code a bit cleaner.
      */
+    ssl_mutex_on(s);
     if ((rc = apr_dbm_open(&dbm, mc->szSessionCacheDataFile,
                            APR_DBM_RWCREATE, SSL_DBM_FILE_MODE, mc->pPool)) != APR_SUCCESS) {
         ap_log_error(APLOG_MARK, APLOG_ERR, rc, s,
                      "Cannot open SSLSessionCache DBM file `%s' for reading "
                      "(fetch)",
                      mc->szSessionCacheDataFile);
+        ssl_mutex_off(s);
         return NULL;
     }
     rc = apr_dbm_fetch(dbm, dbmkey, &dbmval);
     if (rc != APR_SUCCESS) {
         apr_dbm_close(dbm);
+        ssl_mutex_off(s);
         return NULL;
     }
     if (dbmval.dptr == NULL || dbmval.dsize <= sizeof(time_t)) {
         apr_dbm_close(dbm);
+        ssl_mutex_off(s);
         return NULL;
     }
 
@@ -251,12 +255,14 @@
     ucpData = (UCHAR *)malloc(nData);
     if (ucpData == NULL) {
         apr_dbm_close(dbm);
+        ssl_mutex_off(s);
         return NULL;
     }
     memcpy(ucpData, (char *)dbmval.dptr+sizeof(time_t), nData);
     memcpy(&expiry, dbmval.dptr, sizeof(time_t));
 
     apr_dbm_close(dbm);
+    ssl_mutex_off(s);
 
     /* make sure the stuff is still not expired */
     now = time(NULL);

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: DBM SSL Session Cache (Bug # 12705)

2002-09-17 Thread Jeff Trawick

Amund Elstad <[EMAIL PROTECTED]> writes:

> Hi,
> 
> Can someone please explain why ssl_scache_dbm_retrieve() in
> modules/ssl/ssl_scache_dbm.c 
> is not protected by the SSLMutex? 
> 
> The function uses the same pool (module config) as the store/remove/expire
> operations, 
> so shouldn't it also be protected to avoid race conditions ? 
> 
> Our SSL-enabled server (httpd 2.0.40 on win32) locks up in endless
> loops regularly, 
> and I have a trace showing that it is stuck inside pool-cleanup after
> calling dbm_close.
> (see http://nagoya.apache.org/bugzilla/show_bug.cgi?id=12705)

I can't explain why it isn't protected by the mutex.  Have you tried
protecting the code with a mutex and rebuilding?  At worst it would
slow things down a bit.

untested patch:

Index: modules/ssl/ssl_scache_dbm.c
===
RCS file: /home/cvs/httpd-2.0/modules/ssl/ssl_scache_dbm.c,v
retrieving revision 1.16
diff -u -r1.16 ssl_scache_dbm.c
--- modules/ssl/ssl_scache_dbm.c17 May 2002 11:24:17 -  1.16
+++ modules/ssl/ssl_scache_dbm.c17 Sep 2002 11:32:47 -
@@ -228,21 +228,25 @@
      * XXX: Should we open the dbm against r->pool so the cleanup will
      * do the apr_dbm_close? This would make the code a bit cleaner.
      */
+    ssl_mutex_on(s);
     if ((rc = apr_dbm_open(&dbm, mc->szSessionCacheDataFile,
                            APR_DBM_RWCREATE, SSL_DBM_FILE_MODE, mc->pPool)) != APR_SUCCESS) {
         ap_log_error(APLOG_MARK, APLOG_ERR, rc, s,
                      "Cannot open SSLSessionCache DBM file `%s' for reading "
                      "(fetch)",
                      mc->szSessionCacheDataFile);
+        ssl_mutex_off(s);
         return NULL;
     }
     rc = apr_dbm_fetch(dbm, dbmkey, &dbmval);
     if (rc != APR_SUCCESS) {
         apr_dbm_close(dbm);
+        ssl_mutex_off(s);
         return NULL;
     }
     if (dbmval.dptr == NULL || dbmval.dsize <= sizeof(time_t)) {
         apr_dbm_close(dbm);
+        ssl_mutex_off(s);
         return NULL;
     }
 
@@ -251,12 +255,14 @@
     ucpData = (UCHAR *)malloc(nData);
     if (ucpData == NULL) {
         apr_dbm_close(dbm);
+        ssl_mutex_off(s);
         return NULL;
     }
     memcpy(ucpData, (char *)dbmval.dptr+sizeof(time_t), nData);
     memcpy(&expiry, dbmval.dptr, sizeof(time_t));
 
     apr_dbm_close(dbm);
+    ssl_mutex_off(s);
 
     /* make sure the stuff is still not expired */
     now = time(NULL);


-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



RE: Tagged and rolled 2.0.41

2002-09-17 Thread Sander Striker

> From: Mads Toftum [mailto:[EMAIL PROTECTED]]
> Sent: 17 September 2002 12:38

> On Tue, Sep 17, 2002 at 03:41:15AM +0200, Sander Striker wrote:
> > Hi,
> > 
> > I've tagged and rolled 2.0.41.  Please test the tarballs found
> > at http://httpd.apache.org/dev/dist/ and cast thy votes.
> > 
> One very small issue when building (tested on solaris 8 sparc) with
> ssl_expr_parse.y in modules/ssl. Given that the timestamp on that file
> is newer than on the output files, then make wants to run them through
> yacc again. Looking at the timestamps it seems that ssl_expr_parse.y
> has been updated by buildconf. Updating the timestamp on the output
> files makes apache build cleanly without yacc.

Grmpf.  Should we modify httpd_roll_release to touch the output files?

Samder



Re: Tagged and rolled 2.0.41

2002-09-17 Thread Mads Toftum

On Tue, Sep 17, 2002 at 03:41:15AM +0200, Sander Striker wrote:
> Hi,
> 
> I've tagged and rolled 2.0.41.  Please test the tarballs found
> at http://httpd.apache.org/dev/dist/ and cast thy votes.
> 
One very small issue when building (tested on solaris 8 sparc) with
ssl_expr_parse.y in modules/ssl. Given that the timestamp on that file
is newer than on the output files, then make wants to run them through
yacc again. Looking at the timestamps it seems that ssl_expr_parse.y
has been updated by buildconf. Updating the timestamp on the output
files makes apache build cleanly without yacc.

vh

Mads Toftum
-- 
`Darn it, who spiked my coffee with water?!' - lwall




Re: [PATCH]: mod_logio.c

2002-09-17 Thread Bojan Smojver

On Tue, 2002-09-17 at 16:20, Justin Erenkrantz wrote:

> Is there a reason that everything can't be done on the conn_rec?
> What's the reason for having the request-based input filter again?
> (Please refresh my old and decrepit memory!)  Can't we do everything
> in the connection filter?  Isn't this just going to duplicate
> everything (i.e. the conn filter sees it once, then the request
> filter sees it again)?

In the connection-level filter the request is NULL, so we can't store
anything in it. We need the request-based filter to tie the counted
number of input bytes to the particular request.

I figured out what's going on with double counting. My test numbers were
correct and yet the connection-level filter was able to see all data, which
means that the body of the request would get counted twice - and yet it
wasn't. The reason for that is very simple - I did some RTFM but I
wasn't paying attention to the fact that ap_get_brigade() has to be the
*very first* call in the filter... Silly!

Anyway, the correct thing to do is to count *only* in connection level
filter and then just move the note across from c->notes to r->notes in
the request level filter. I've corrected that part of the code.

Also, I think that allowing SetInputFilter and SetOutputFilter options
once the module is compiled in might create problems. If the same
connection is used for 2 different virtual hosts and one of them doesn't
have the filter configured, the c->notes will never get unset, thus
creating a problem. So, I think I'll use the explicit hooks instead. I've
changed that part of the code as well.

> In fact, if we make it a request filter, aren't connection-level
> filters such as SSL make the value different?  For a SSL connection,
> we have to set these values *before* (on input) or *after* (on
> output) SSL gets it.  We're only concerned with the actual bytes
> transmitted on the network socket.

Yep, you are right there as well. SSL will definitely change that. It
should be OK for input (I fixed that part), since the counting is done
only in the connection level filter. But, I'll have to fix up the output
for sure (i.e. count in the connection level filter). Not sure yet how
to do that, but will keep trying :-)

> I still think we can work it out so that the connection filters
> have a pointer to the request_rec.  (i.e. ap_read_request could
> update a pointer within the conn to indicate the current request
> *or* update all of the current input filters to point at the new
> request_rec.)  I think we can manage something that won't get
> vetoed.

If I knew how to pull that one off, I would definitely go for it :-)
Promise to have a look though.

> And, again, I'm totally in favor of having this filter update/modify
> r->bytes_sent/r->read_length (perhaps replacing read_length in favor
> of bytes_read to be consistent).  I maintain the definition of
> bytes_sent excluding the headers is remarkably lame and no one ever
> really wants that anyway.
> 
> Has anyone said it should remain broken?  If so, why?  -- justin

Actually, yes. I think someone mentioned in one of the previous threads
that r->bytes_sent should be *only* the actual length of the sent body,
not the headers. This filter would screw that up.

When I thought about all this a bit more, it seemed like a bit of an
upset to introduce two new fields into request_rec (given that
bytes_sent would have to be kept as is). Furthermore, not many people
would opt for this kind of logging, so the fields would do nothing for
them. On the other hand, notes are strings and in mod_log_config.c they
don't have to be converted into anything else but just passed on as they
are. Probably a bit slower than the request_rec field approach, but I'm
guessing it shouldn't be all that bad. And it won't upset the people
that don't want request_rec increased in size because of this.

Once again, thank you for your input. I really appreciate it. And thank
you for your patience while I'm learning the Apache API!

Bojan




DBM SSL Session Cache (Bug # 12705)

2002-09-17 Thread Amund Elstad

Hi,

Can someone please explain why ssl_scache_dbm_retrieve() in
modules/ssl/ssl_scache_dbm.c is not protected by the SSLMutex?

The function uses the same pool (module config) as the store/remove/expire
operations, so shouldn't it also be protected to avoid race conditions?

Our ssl-enabled server (httpd 2.0.40 on win32) locks up in endless
loops regularly, and I have a trace showing that it is stuck inside
pool cleanup after calling dbm_close.
(see http://nagoya.apache.org/bugzilla/show_bug.cgi?id=12705)
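For reference, the fix being asked about would presumably bracket the retrieve path the same way the store/remove/expire paths are bracketed. A rough sketch only; the helper names (ssl_mutex_on/ssl_mutex_off) and config fields are my reading of the mod_ssl 2.0 source, and whether taking the mutex here is actually safe on win32 is exactly the open question:

```c
/* Sketch: protect the retrieve path with the same interprocess mutex
 * that store/remove/expire use, since they all share mc->pPool for
 * their apr_dbm operations. */
ssl_mutex_on(s);
if (apr_dbm_open(&dbm, mc->szSessionCacheDataFile, APR_DBM_READONLY,
                 APR_OS_DEFAULT, mc->pPool) == APR_SUCCESS) {
    apr_dbm_fetch(dbm, dbmkey, &dbmval);
    apr_dbm_close(dbm);
}
ssl_mutex_off(s);
```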


thanks 
amund elstad 





Re: Preserving history in dev/dist?

2002-09-17 Thread Justin Erenkrantz

On Tue, Sep 17, 2002 at 09:59:58AM +0200, Sander Striker wrote:
> Hi,
> 
> When uploading the 2.0.41 tarballs I encountered an 'old'
> folder in /www/httpd.apache.org/dev/dist.  Upon inspection
> it contains two unreleased alpha tarballs.  Do we really
> want to keep history of our unreleased tarballs?  Or can
> we just delete this directory?

I vote 'delete' as these were never formally released, hence
no need to keep them for posterity.  -- justin



Preserving history in dev/dist?

2002-09-17 Thread Sander Striker

Hi,

When uploading the 2.0.41 tarballs I encountered an 'old'
folder in /www/httpd.apache.org/dev/dist.  Upon inspection
it contains two unreleased alpha tarballs.  Do we really
want to keep history of our unreleased tarballs?  Or can
we just delete this directory?

Sander