Re: Tagging 2.0.37

2002-06-10 Thread Justin Erenkrantz

On Mon, Jun 10, 2002 at 08:44:25PM -0700, Greg Stein wrote:
> You sent me a note about the 304, but I never really participated in that
> conversation...

Indirectly, you did.  I can't commit my fix because you objected to
it.

> Looking over it...
> 
> Hmm... a lot of this stuff is just too "loose". There isn't enough
> coordination between what the server is trying to do and the resources that
> are targeted by the request. The server is unable to say, "hey. I'm going to
> modify your output [which means a 304 is improper]".

> It seems that we can look at the filter chain for AP_FTYPE_RESOURCE filters.
> If any are present, then disable the 304 response. Of course, we have to do
> this checking *after* the filter stack has been appropriately set up. That
> might be after a LAST filter on insert_filters.

As you said, each RESOURCE filter may not really change the content.
That's why I'm suggesting my approach of giving each filter a function
through which it can indicate whether it can participate in
If-Modified-Since requests.

I'm not sure how well it would work if we make it so that whenever any
filter with a rating < CONTENT_SET is present, we have to disable 304
responses.  I'm thinking something like mod_bucketeer doesn't really change
the L-M date, because its output is predictable based on the file contents.
PHP and mod_include aren't like that.

> I'm not sure what the "no_last_copy" flag is about that was mentioned on the
> list.

The no_local_copy flag disables 304 responses in
ap_meets_conditions().  -- justin
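
The no_local_copy flag is an existing request_rec field; as a minimal sketch
(the hook function name below is made up for illustration), a module that
knows it will rewrite the body could set it so that ap_meets_conditions()
never answers with a 304:

#include "httpd.h"
#include "http_config.h"

static int example_fixup(request_rec *r)
{
    /* tell ap_meets_conditions() that no cached/local copy is valid,
     * so it will not short-circuit this request with a 304 */
    r->no_local_copy = 1;
    return DECLINED;
}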



which libtool?

2002-06-10 Thread Cliff Woolley


Does somebody want to tell me which machine I should build the 2.0.37
tarball on so as to minimize the libtool-bustedness across platforms?

Stick with icarus even though it's running libtool 1.3.4?

--Cliff




Re: cvs commit: httpd-2.0 CHANGES

2002-06-10 Thread Doug MacEachern

On Mon, 10 Jun 2002, Doug MacEachern wrote:
 
> I'd be surprised if 'SSLOptions +OptRenegotiate' actually ever worked for 
> anybody before this change, including the 1.3-based modssl, which still has 
> this issue.

I take that back a bit: I'd be surprised if it worked for anybody using
Netscape 4.xx, where you can see:
- click on the security lock icon
  - click on "Navigator"
There is an option there, "Certificate to identify you to a website";
the default is [Ask Every Time].

It is only an issue in that case, where the first request prompts for a
client cert; any request after that with SSLSessionCache results in
FORBIDDEN with the "Cannot find peer certificate chain" error_log message.

This is not a problem when the Netscape option is changed to
[Select Automatically], which I think newer versions do by default; same
with IE and likely other clients.




Re: Tagging 2.0.37

2002-06-10 Thread Greg Stein

On Mon, Jun 10, 2002 at 05:55:01PM -0400, Cliff Woolley wrote:
> Since we're not getting anywhere on the "showstopper" 304 issue, I'm
> getting sick of putting this thing off any longer.  Unless somebody speaks
> up in the next hour or so, I'm tagging 2.0.37.

You sent me a note about the 304, but I never really participated in that
conversation...

Looking over it...

Hmm... a lot of this stuff is just too "loose". There isn't enough
coordination between what the server is trying to do and the resources that
are targeted by the request. The server is unable to say, "hey. I'm going to
modify your output [which means a 304 is improper]".

It seems that we can look at the filter chain for AP_FTYPE_RESOURCE filters.
If any are present, then disable the 304 response. Of course, we have to do
this checking *after* the filter stack has been appropriately set up. That
might be after a LAST filter on insert_filters.
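
As a rough sketch of that check (assuming it runs after insert_filters, and
that every AP_FTYPE_RESOURCE filter really would alter the body):

#include "httpd.h"
#include "util_filter.h"

static int has_resource_filters(request_rec *r)
{
    ap_filter_t *f;

    for (f = r->output_filters; f != NULL; f = f->next) {
        if (f->frec->ftype == AP_FTYPE_RESOURCE) {
            return 1;   /* something may rewrite the body; skip the 304 */
        }
    }
    return 0;
}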


I'm not sure what the "no_last_copy" flag is about that was mentioned on the
list.


Hmm. One other point to make is that the simple presence of a filter isn't
enough to disqualify 304 generation. It could be that a .shtml document has
no tags that would alter its output.

Maybe a metabucket that would say "potential for 304". Handlers and resource
filters could expend a bit of extra work to defer their hardcore work until
a 304 is positively determined (or positively negated).

Ugh...

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/



Re: cvs commit: httpd-2.0 CHANGES

2002-06-10 Thread Doug MacEachern

Just a note on this: 'SSLOptions +OptRenegotiate' simulates what 
s3_srvr.c:ssl3_get_client_certificate() would do when calling 
ssl_verify_cert_chain() with the certs presented by the client.

For whatever reason, when the cert chain is saved to the session cache, 
the peer cert is removed from the chain:
s->session->peer=sk_X509_shift(sk);
...
s->session->sess_cert->cert_chain=sk;
/* Inconsistency alert: cert_chain does *not* include the
 * peer's own certificate, while we do include it in s3_clnt.c */

So this workaround simply pushes the peer cert from the session cache back 
into the "chain".

I'd be surprised if 'SSLOptions +OptRenegotiate' actually ever worked for 
anybody before this change, including the 1.3-based modssl, which still has 
this issue.

On 11 Jun 2002 [EMAIL PROTECTED] wrote:

> dougm   2002/06/10 20:12:34
> 
>   Modified:modules/ssl ssl_engine_kernel.c
>.CHANGES
>   Log:
>   'SSLOptions +OptRengotiate' will use client cert in from the ssl
>   session cache when there is no cert chain in the cache.  prior to
>   the fix this situation would result in a FORBIDDEN response and
>   error message "Cannot find peer certificate chain"
>   
>   Revision  ChangesPath
>   1.73  +15 -0 httpd-2.0/modules/ssl/ssl_engine_kernel.c
>   
>   Index: ssl_engine_kernel.c
>   ===
>   RCS file: /home/cvs/httpd-2.0/modules/ssl/ssl_engine_kernel.c,v
>   retrieving revision 1.72
>   retrieving revision 1.73
>   diff -u -r1.72 -r1.73
>   --- ssl_engine_kernel.c 4 Jun 2002 07:12:26 -   1.72
>   +++ ssl_engine_kernel.c 11 Jun 2002 03:12:33 -  1.73
>   @@ -709,6 +709,16 @@
>
>cert_stack = (STACK_OF(X509) *)SSL_get_peer_cert_chain(ssl);
>
>   +if (!cert_stack && (cert = SSL_get_peer_certificate(ssl))) {
>   +/* client cert is in the session cache, but there is
>   + * no chain, since ssl3_get_client_certificate()
>   + * sk_X509_shift-ed the peer cert out of the chain.
>   + * we put it back here for the purpose of quick_renegotiation.
>   + */
>   +cert_stack = sk_new_null();
>   +sk_X509_push(cert_stack, cert);
>   +}
>   +
>if (!cert_stack || (sk_X509_num(cert_stack) == 0)) {
>ap_log_error(APLOG_MARK, APLOG_ERR, 0, r->server,
> "Cannot find peer certificate chain");
>   @@ -745,6 +755,11 @@
>
>SSL_set_verify_result(ssl, cert_store_ctx.error);
>X509_STORE_CTX_cleanup(&cert_store_ctx);
>   +
>   +if (cert_stack != SSL_get_peer_cert_chain(ssl)) {
>   +/* we created this ourselves, so free it */
>   +sk_X509_pop_free(cert_stack, X509_free);
>   +}
>}
>else {
>request_rec *id = r->main ? r->main : r;
>   
>   
>   
>   1.819 +6 -0  httpd-2.0/CHANGES
>   
>   Index: CHANGES
>   ===
>   RCS file: /home/cvs/httpd-2.0/CHANGES,v
>   retrieving revision 1.818
>   retrieving revision 1.819
>   diff -u -r1.818 -r1.819
>   --- CHANGES 10 Jun 2002 18:51:37 -  1.818
>   +++ CHANGES 11 Jun 2002 03:12:33 -  1.819
>   @@ -1,5 +1,11 @@
>Changes with Apache 2.0.37
>
>   +  *) 'SSLOptions +OptRengotiate' will use client cert in from the ssl
>   + session cache when there is no cert chain in the cache.  prior to
>   + the fix this situation would result in a FORBIDDEN response and
>   + error message "Cannot find peer certificate chain"
>   + [Doug MacEachern]
>   +
>  *) ap_finalize_sub_req_protocol() shouldn't send an EOS bucket if
> one was already sent.  PR 9644  [Jeff Trawick]
>
>   
>   
>   
> 




Re: [PATCH] SSL, POST, and renegotiation

2002-06-10 Thread Doug MacEachern

Try with current CVS and 'SSLOptions +OptRenegotiate' configured.
With this option enabled, modssl will use the client cert from the SSL 
session cache if one was not already sent by the client.  In this case, 
modssl will not need to read from the client, since full renegotiation is 
bypassed.  This of course requires that you have an SSLSessionCache of some 
sort enabled, and that your client either sends a cert automatically or 
is first asked to send one during a GET request, from which point the 
cert will be in the session cache when any POST happens afterwards.
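
A hedged example of the configuration being described (the paths and the
Location are illustrative only):

SSLSessionCache dbm:logs/ssl_scache
SSLSessionCacheTimeout 300

<Location /secure>
    SSLVerifyClient require
    SSLOptions +OptRenegotiate
</Location>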

As for supporting POST where a client cert is required on a per-location 
basis and OptRenegotiate is not enabled, I think any solution will be 
very painful to get right.  When POST data is small, setting it aside in 
memory isn't so bad, but allowing large POSTs before the client has 
actually been authenticated leaves open potential DoS attacks.  Saving 
large POSTs to disk would likely result in more potential badness.





RE: Recursive error processing.

2002-06-10 Thread Ryan Bloom

> From: Cliff Woolley [mailto:[EMAIL PROTECTED]]
> 
> On Mon, 10 Jun 2002, Ryan Bloom wrote:
> 
> > I don't have any ideas.  I can't reproduce this problem though.  I'll
> > keep debugging on my end.  Cliff, this may take some time.
> 
> Any progress?  I *can* reproduce this and am looking at it.  I basically
> just took Paul's httpd.conf and ssl.conf (and tweaked the paths)... though
> they're very default-looking config files for the most part.  I used my
> config.nice, though it's largely similar to Paul's in that it enables all
> of the modules.

Still can't dup.   :-(  I'll keep trying.

> Why is ap_run_post_read_request() returning 400?

Because that is when mod_ssl notices that the request was HTTP on the
HTTPS port.

Ryan





RE: Recursive error processing.

2002-06-10 Thread Cliff Woolley

On Mon, 10 Jun 2002, Ryan Bloom wrote:

> I don't have any ideas.  I can't reproduce this problem though.  I'll
> keep debugging on my end.  Cliff, this may take some time.

Any progress?  I *can* reproduce this and am looking at it.  I basically
just took Paul's httpd.conf and ssl.conf (and tweaked the paths)... though
they're very default-looking config files for the most part.  I used my
config.nice, though it's largely similar to Paul's in that it enables all
of the modules.

Why is ap_run_post_read_request() returning 400?

--Cliff




Re: apr_time_t --> apr_time_usec_t

2002-06-10 Thread Brian Pane

Jim Jagielski wrote:

>Roy T. Fielding wrote:
>  
>
>>I still think it is insane to multiply or divide every time we want to
>>use seconds.  Not a showstopper, though.
>>
>>
>>
>
>Insane? Yep. But if we require 

That's why I want to use a struct with separate fields for seconds
and microseconds.

--Brian





Re: apr_time_t --> apr_time_usec_t

2002-06-10 Thread Jim Jagielski

Roy T. Fielding wrote:
> 
> I still think it is insane to multiply or divide every time we want to
> use seconds.  Not a showstopper, though.
> 

Insane? Yep. But if we require 

Re: apr_time_t --> apr_time_usec_t

2002-06-10 Thread Roy T. Fielding

>>> If-Modified-Since doesn't work because an HTTP time based on
>>> seconds x10^6 is being compared to a file modification time
>>> based directly on microseconds.
>>
>> I thought I fixed that already!?  Oh boy, did the patch not get 
>> committed?
>> It might be sitting in the PR waiting for somebody to test it.
>>
>> I'll go check.
>
> No, I committed a patch for this on May 8.  It's still broken for you?  In
> HEAD?  On Unix or Win32?

No, I missed that you had mostly fixed it --- I had saved the original
report for later work.

I still think it is insane to multiply or divide every time we want to
use seconds.  Not a showstopper, though.

Roy




Re: [PATCH] SSL, POST, and renegotiation

2002-06-10 Thread Justin Erenkrantz

> AFAIK, this situation isn't implemented yet for 2.x.  Currently, the server

Yes, I got hit by the clue stick from Cliff.  This is a special case
where mod_ssl wants to empty its input.

> > P.S.  core_request_config->bb shouldn't be used at all.
> 
> Oh, I see.  May I ask for a general overview of the reasoning here?  How else
> may the data be passed around, short of creating a hook or adding onto a
> structure?  It was my _guess_ that this could be used, since
> ap_get_client_block() uses it already; no change would be required for that
> function.

core_request_config->bb is bogus in 2.0, as is ap_get_client_block
(but I can't seem to convince Ryan of that yet).  True 2.0
modules should be using ap_get_brigade, NOT ap_get_client_block.
The only reason we're keeping ap_get_client_block() is to ease
migration of 1.3 modules.  As you sort of figured out, every
other module and filter would have to be special-cased to
re-read your data.
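
For reference, a hedged sketch of the 2.0-style read loop (the shape mirrors
the loop mod_cgi uses; error handling trimmed):

static apr_status_t read_body(request_rec *r)
{
    apr_bucket_brigade *bb = apr_brigade_create(r->pool,
                                                r->connection->bucket_alloc);
    int seen_eos = 0;

    do {
        apr_bucket *b;
        apr_status_t rv = ap_get_brigade(r->input_filters, bb,
                                         AP_MODE_READBYTES, APR_BLOCK_READ,
                                         HUGE_STRING_LEN);
        if (rv != APR_SUCCESS) {
            return rv;
        }
        APR_BRIGADE_FOREACH(b, bb) {
            const char *data;
            apr_size_t len;

            if (APR_BUCKET_IS_EOS(b)) {
                seen_eos = 1;
                break;
            }
            apr_bucket_read(b, &data, &len, APR_BLOCK_READ);
            /* ... consume len bytes at data ... */
        }
        apr_brigade_cleanup(bb);
    } while (!seen_eos);

    return APR_SUCCESS;
}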

What you want to do is trigger mod_ssl's input filter to read
everything it can and then leave it there in its own brigade.
When the next input call comes, it can deliver the data under
the old encryption (but already processed by OpenSSL) as well
as the new (unread) data.

DougM rewrote a lot of the SSL input/output filtering to use BIO
abstractions.  What it seems you want to do is some special
combination of:

ssl_io_hook_read
char_buffer_write

HTH, but Doug may have better ideas since he knows the code a
bit better than I do.  -- justin



Re: [PATCH] SSL, POST, and renegotiation

2002-06-10 Thread Dan Sully

Once upon a time Nathan Friess shaped the electrons to say...

> AFAIK, this situation isn't implemented yet for 2.x.  Currently, the server
> just returns a 'forbidden' response.  There's a long comment in
> modules/ssl/ssl_engine_kernel.c which explains it all.  I'm running some
> scripts which accept data from posts, and I'd like to be able to use them
> over https where the clients use certificates to authenticate.  A
> renegotiation is required when the certificate must be presented for only
> certain URLs.  Since I made the changes -- at least for my own use -- I
> thought I'd see if they make sense and could be actually used for the
> mainstream sources.  By the way, I noticed that there is less of a problem
> with clients running Mozilla, since Mozilla seems to send the certificate
> without asking.  IE first tries without the certificate, and then
> renegotiates.

This is a problem which I've run into as well. Our "workaround" was to create
another virtual server to which our customers would explicitly send POST
requests with certificates. This is still a problem for people using SSL
toolkits instead of browsers, too. I'd love to see this fix go in for 1.3.x and 2.x.

-D
--
The things you own end up owning you.



RE: Recursive error processing.

2002-06-10 Thread Ryan Bloom

I don't have any ideas.  I can't reproduce this problem though.  I'll
keep debugging on my end.  Cliff, this may take some time.

Ryan

--
Ryan Bloom  [EMAIL PROTECTED]
645 Howard St.  [EMAIL PROTECTED]
San Francisco, CA 

> -Original Message-
> From: Paul J. Reder [mailto:[EMAIL PROTECTED]]
> Sent: Monday, June 10, 2002 4:51 PM
> To: [EMAIL PROTECTED]
> Subject: Re: Recursive error processing.
> 
> Bad news. I just finished running
> 
> cvs update -dP httpd-2.0;cd httpd-2.0;make
> distclean;buildconf;config.nice;make;make install
> 
> and tested it. The same thing still happens with the config I referenced
> earlier.
> 
> Any other ideas?
> 
> Paul J. Reder wrote:
> 
> > Hmmm, I missed them. I'm updating and building now, I'll have an answer
> > shortly after dinner.
> >
> > Ryan Bloom wrote:
> >
> >>> I'm running with CVS head as of Friday morning with
> >>> OpenSSL 0.9.6b [engine] 9 Jul 2001 on Linux (RedHat 7.2). I've
> >>> attached my httpd.conf, ssl.conf, and config.nice files.
> >>> I have been able to reproduce it on worker and prefork on two
> >>> different Linux boxes (both redhat 7.2).
> >>>
> >>> All I do is bring the box up and use Mozilla to send the request
> >>> "http://sol.Reders:443"; and watch the cpu start spinning.
> >>>
> >>
> >> Please update your tree.  There were changes to how Apache handles
> >> calling ap_die and ap_discard_request_body() on Friday evening.
> >>
> >> Ryan
> >>
> >>
> >>
> >>
> >
> >
> 
> 
> --
> Paul J. Reder
> ---
> "The strength of the Constitution lies entirely in the determination
of
> each
> citizen to defend it.  Only if every single citizen feels duty bound
to do
> his share in this defense are the constitutional rights secure."
> -- Albert Einstein
> 





Re: Recursive error processing.

2002-06-10 Thread Paul J. Reder

Bad news. I just finished running

cvs update -dP httpd-2.0;cd httpd-2.0;make distclean;buildconf;config.nice;make;make 
install

and tested it. The same thing still happens with the config I referenced earlier.

Any other ideas?

Paul J. Reder wrote:

> Hmmm, I missed them. I'm updating and building now, I'll have an answer 
> shortly
> after dinner.
> 
> Ryan Bloom wrote:
> 
>>> I'm running with CVS head as of Friday morning with
>>> OpenSSL 0.9.6b [engine] 9 Jul 2001 on Linux (RedHat 7.2). I've
>>> attached my httpd.conf, ssl.conf, and config.nice files.
>>> I have been able to reproduce it on worker and prefork on two
>>> different Linux boxes (both redhat 7.2).
>>>
>>> All I do is bring the box up and use Mozilla to send the request
>>> "http://sol.Reders:443"; and watch the cpu start spinning.
>>>
>>
>> Please update your tree.  There were changes to how Apache handles
>> calling ap_die and ap_discard_request_body() on Friday evening.
>>
>> Ryan
>>
>>
>>
>>
> 
> 


-- 
Paul J. Reder
---
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein





Re: [PATCH] SSL, POST, and renegotiation

2002-06-10 Thread Nathan Friess

From: "Justin Erenkrantz" <[EMAIL PROTECTED]>
Sent: Monday, June 10, 2002 4:30 PM
> On Mon, Jun 10, 2002 at 04:20:06PM -0600, Nathan Friess wrote:
> > A while back I started working with the httpd sources in an attempt to
> > create the missing code for POSTing over SSL when renegotiation is
> > required.  I made the necessary changes, tested the code using several
> > 1 to 30 megabyte binary files, and it seems to work nicely.
>
> Um, what problem are you seeing?  -- justin

AFAIK, this situation isn't implemented yet for 2.x.  Currently, the server
just returns a 'forbidden' response.  There's a long comment in
modules/ssl/ssl_engine_kernel.c which explains it all.  I'm running some
scripts which accept data from posts, and I'd like to be able to use them
over https where the clients use certificates to authenticate.  A
renegotiation is required when the certificate must be presented for only
certain URLs.  Since I made the changes -- at least for my own use -- I
thought I'd see if they make sense and could be actually used for the
mainstream sources.  By the way, I noticed that there is less of a problem
with clients running Mozilla, since Mozilla seems to send the certificate
without asking.  IE first tries without the certificate, and then
renegotiates.

>
> P.S.  core_request_config->bb shouldn't be used at all.
>

Oh, I see.  May I ask for a general overview of the reasoning here?  How else
may the data be passed around, short of creating a hook or adding onto a
structure?  It was my _guess_ that this could be used, since
ap_get_client_block() uses it already; no change would be required for that
function.

Nathan






Re: apr_time_t --> apr_time_usec_t

2002-06-10 Thread Cliff Woolley

On Mon, 10 Jun 2002, Cliff Woolley wrote:

> No, I committed a patch for this on May 8.  It's still broken for you?  In
> HEAD?  On Unix or Win32?

PS: See http://nagoya.apache.org/bugzilla/show_bug.cgi?id=8760 .

--Cliff




Re: apr_time_t --> apr_time_usec_t

2002-06-10 Thread Cliff Woolley

On Mon, 10 Jun 2002, Cliff Woolley wrote:

> On Mon, 10 Jun 2002, Roy T. Fielding wrote:
>
> > If-Modified-Since doesn't work because an HTTP time based on
> > seconds x10^6 is being compared to a file modification time
> > based directly on microseconds.
>
> I thought I fixed that already!?  Oh boy, did the patch not get committed?
> It might be sitting in the PR waiting for somebody to test it.
>
> I'll go check.

No, I committed a patch for this on May 8.  It's still broken for you?  In
HEAD?  On Unix or Win32?

--Cliff




Re: apr_time_t --> apr_time_usec_t

2002-06-10 Thread Cliff Woolley

On Mon, 10 Jun 2002, Roy T. Fielding wrote:

> If-Modified-Since doesn't work because an HTTP time based on
> seconds x10^6 is being compared to a file modification time
> based directly on microseconds.

I thought I fixed that already!?  Oh boy, did the patch not get committed?
It might be sitting in the PR waiting for somebody to test it.

I'll go check.

--Cliff




Re: last modified header

2002-06-10 Thread Joshua Slive

Jie Gao wrote:

>>This is quite ambiguous, but I think this is how it should read:
>>
>>On the Apache web server, the last modified HTTP header is returned if
>>the file is an HTML file.  If it is a SHTML (or processed by
>>mod_include), then the last modified header is only returned when
>>the SHTML file is executable. 

And the XBitHack directive is set to "Full".

>>Otherwise, for a SHTML file with no
>>executable bit set, no last modified information is returned.
> 
> What would be desirable is a directive to send the last modified header
> for all files.

That would just be wrong.  When content is dynamically generated, the 
"last-modified" date would need to be the same as the current time, so 
it wouldn't serve any purpose.

Joshua.




Re: apr_time_t --> apr_time_usec_t

2002-06-10 Thread Roy T. Fielding


On Monday, June 10, 2002, at 03:22  PM, Cliff Woolley wrote:

> On Mon, 10 Jun 2002, Roy T. Fielding wrote:
>
>> I know of one existing bug in httpd that I would consider a
>> showstopper, if I were RM, due to the way APR handles time.
>
> Are you going to tell me what it is?  :)

If-Modified-Since doesn't work because an HTTP time based on
seconds x10^6 is being compared to a file modification time
based directly on microseconds.

Roy




Re: Recursive error processing.

2002-06-10 Thread Paul J. Reder

Hmmm, I missed them. I'm updating and building now, I'll have an answer shortly
after dinner.

Ryan Bloom wrote:

>>I'm running with CVS head as of Friday morning with
>>OpenSSL 0.9.6b [engine] 9 Jul 2001 on Linux (RedHat 7.2). I've
>>attached my httpd.conf, ssl.conf, and config.nice files.
>>I have been able to reproduce it on worker and prefork on two
>>different Linux boxes (both redhat 7.2).
>>
>>All I do is bring the box up and use Mozilla to send the request
>>"http://sol.Reders:443"; and watch the cpu start spinning.
>>
> 
> Please update your tree.  There were changes to how Apache handles
> calling ap_die and ap_discard_request_body() on Friday evening.
> 
> Ryan
> 
> 
> 
> 


-- 
Paul J. Reder
---
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein





Re: last modified header

2002-06-10 Thread Jie Gao

On Mon, 10 Jun 2002, Justin Erenkrantz wrote:

> On Tue, Jun 11, 2002 at 08:41:08AM +1000, Jie Gao wrote:
> > Hi,
> >
> > From http://www.xav.com/scripts/search/help/1068.html:
> >
> > On the Apache web server, the last modified HTTP header is returned if
> > the HTML or SHTML file is executable. If the file has only read
> > permission, then no last modified information is returned.
>
> This is quite ambiguous, but I think this is how it should read:
>
> On the Apache web server, the last modified HTTP header is returned if
> the file is an HTML file.  If it is a SHTML (or processed by
> mod_include), then the last modified header is only returned when
> the SHTML file is executable.  Otherwise, for a SHTML file with no
> executable bit set, no last modified information is returned.

Thanks.

What would be desirable is a directive to send the last modified header
for all files.

Regards,



Jie




RE: Recursive error processing.

2002-06-10 Thread Ryan Bloom

> I'm running with CVS head as of Friday morning with
> OpenSSL 0.9.6b [engine] 9 Jul 2001 on Linux (RedHat 7.2). I've
> attached my httpd.conf, ssl.conf, and config.nice files.
> I have been able to reproduce it on worker and prefork on two
> different Linux boxes (both redhat 7.2).
> 
> All I do is bring the box up and use Mozilla to send the request
> "http://sol.Reders:443"; and watch the cpu start spinning.

Please update your tree.  There were changes to how Apache handles
calling ap_die and ap_discard_request_body() on Friday evening.

Ryan





Re: Recursive error processing.

2002-06-10 Thread Justin Erenkrantz

On Mon, Jun 10, 2002 at 06:52:52PM -0400, Paul J. Reder wrote:
> I'm running with CVS head as of Friday morning with
> OpenSSL 0.9.6b [engine] 9 Jul 2001 on Linux (RedHat 7.2).

rbb's changes went in on 2002/06/07 22:31:34 GMT.  =)  

You should update.  -- justin



Re: Recursive error processing.

2002-06-10 Thread Paul J. Reder

I'm running with CVS head as of Friday morning with
OpenSSL 0.9.6b [engine] 9 Jul 2001 on Linux (RedHat 7.2). I've
attached my httpd.conf, ssl.conf, and config.nice files.
I have been able to reproduce it on worker and prefork on two
different Linux boxes (both redhat 7.2).

All I do is bring the box up and use Mozilla to send the request
"http://sol.Reders:443"; and watch the cpu start spinning.

Ryan Bloom wrote:

>>From: Cliff Woolley [mailto:[EMAIL PROTECTED]]
>>
>>On Mon, 10 Jun 2002, Ryan Bloom wrote:
>>
>>
>>>Please make sure that your code is up to date, because the server is
>>>supposed to have logic that protects us from getting into an infinite
>>>loop.
>>>
>>Paul, I notice the line numbers in your back trace don't quite match up
>>with mine... is this HEAD?  Or are there local mods?
>>
>>
>>>Wait a sec, the problem could be the ErrorDocument path.  The test suite
>>>doesn't exercise that path.  Will report back soon.
>>>
>>Ah.  Well I'll wait for Ryan to check that then.
>>
> 
> I've tried everything I can think of to make this fail.  It refuses to
> fail for me.  Please make sure that your code is up to date, and let me
> know what version of the SSL libraries you are using.  For completeness,
> here are my test cases:
> 
> 1)  Run the test suite  (this tests http://localhost:8350 where 8350 is
> the SSL port).  Also requested a page through telnet and Konqueror.
> 
> 2)  Add a plain text ErrorDocument for 400 requests.  Request a page
> 
> 3)  Copy the HTTP_BAD_REQUEST.html.var files and the config to my test
> server, request a page.
> 
> All three scenarios work for me on Linux.  There is a problem in the 3rd
> case, which looks to be from a non-terminated string (bad, but not a
> buffer overflow, we just forgot to add a \0).  I'll fix that quickly.
> Paul or Allen, can either of you provide more details?  There really is
> logic in the server to stop the ap_die calls from being recursive, so
> this bug really surprises me.
> 
> Ryan
> 
> 
> 
> 


-- 
Paul J. Reder
---
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein



ServerName sol.Reders
#
# Based upon the NCSA server configuration files originally by Rob McCool.
#
# This is the main Apache server configuration file.  It contains the
# configuration directives that give the server its instructions.
# See <URL:http://httpd.apache.org/docs-2.0/> for detailed information about
# the directives.
#
# Do NOT simply read the instructions in here without understanding
# what they do.  They're here only as hints or reminders.  If you are unsure
# consult the online docs. You have been warned.  
#
# The configuration directives are grouped into three basic sections:
#  1. Directives that control the operation of the Apache server process as a
# whole (the 'global environment').
#  2. Directives that define the parameters of the 'main' or 'default' server,
# which responds to requests that aren't handled by a virtual host.
# These directives also provide default values for the settings
# of all virtual hosts.
#  3. Settings for virtual hosts, which allow Web requests to be sent to
# different IP addresses or hostnames and have them handled by the
# same Apache server process.
#
# Configuration and logfile names: If the filenames you specify for many
# of the server's control files begin with "/" (or "drive:/" for Win32), the
# server will use that explicit path.  If the filenames do *not* begin
# with "/", the value of ServerRoot is prepended -- so "logs/foo.log"
# with ServerRoot set to "/usr/local/apache" will be interpreted by the
# server as "/usr/local/apache/logs/foo.log".
#

### Section 1: Global Environment
#
# The directives in this section affect the overall operation of Apache,
# such as the number of concurrent requests it can handle or where it
# can find its configuration files.
#

#
# ServerRoot: The top of the directory tree under which the server's
# configuration, error, and log files are kept.
#
# NOTE!  If you intend to place this on an NFS (or otherwise network)
# mounted filesystem then please read the LockFile documentation
# (available at <URL:http://httpd.apache.org/docs-2.0/mod/core.html#lockfile>);
# you will save yourself a lot of trouble.
#
# Do NOT add a slash at the end of the directory path.
#
ServerRoot "/home/rederpj/Apache"

#
# The accept serialization lock file MUST BE STORED ON A LOCAL DISK.
#


#LockFile logs/accept.lock



#
# ScoreBoardFile: File used to store internal server process information.
# Not all architectures require this.  But if yours does (you'll know because
# this file will be  created when you run Apache) then you *must* ensure that
# no two invocations of Apache share the same scoreboard file.
#



Scor

Re: last modified header

2002-06-10 Thread Justin Erenkrantz

On Tue, Jun 11, 2002 at 08:41:08AM +1000, Jie Gao wrote:
> Hi,
> 
> From http://www.xav.com/scripts/search/help/1068.html:
> 
> On the Apache web server, the last modified HTTP header is returned if
> the HTML or SHTML file is executable. If the file has only read
> permission, then no last modified information is returned.

This is quite ambiguous, but I think this is how it should read:

On the Apache web server, the last modified HTTP header is returned if
the file is an HTML file.  If it is a SHTML (or processed by
mod_include), then the last modified header is only returned when
the SHTML file is executable.  Otherwise, for a SHTML file with no
executable bit set, no last modified information is returned.
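
A hedged config illustration of that behaviour (see the mod_include docs for
the exact semantics):

XBitHack Full
# then mark the parsed file executable so mod_include both parses it and
# sends a Last-Modified header based on its mtime:
#   chmod +x page.html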

HTH.  

In the end, we are doing the 'correct' thing.  -- justin



Re: last modified header

2002-06-10 Thread Aaron Bannert

On Tue, Jun 11, 2002 at 08:41:08AM +1000, Jie Gao wrote:
> From http://www.xav.com/scripts/search/help/1068.html:
> 
> On the Apache web server, the last modified HTTP header is returned if
> the HTML or SHTML file is executable. If the file has only read
> permission, then no last modified information is returned.
> 
> If this is still the case, it is impossible to take advantage of
> Solaris' priority_paging if "last modified" header is also wanted.

Why do you want priority paging on HTML/SHTML files?

-aaron



RE: Recursive error processing.

2002-06-10 Thread Ryan Bloom

> From: Cliff Woolley [mailto:[EMAIL PROTECTED]]
> 
> On Mon, 10 Jun 2002, Ryan Bloom wrote:
> 
> > Please make sure that your code is up to date, because the server is
> > supposed to have logic that protects us from getting into an infinite
> > loop.
> 
> Paul, I notice the line numbers in your back trace don't quite match up
> with mine... is this HEAD?  Or are there local mods?
> 
> > Wait a sec, the problem could be the ErrorDocument path.  The test suite
> > doesn't exercise that path.  Will report back soon.
> 
> Ah.  Well I'll wait for Ryan to check that then.

I've tried everything I can think of to make this fail.  It refuses to
fail for me.  Please make sure that your code is up to date, and let me
know what version of the SSL libraries you are using.  For completeness,
here are my test cases:

1)  Run the test suite  (this tests http://localhost:8350 where 8350 is
the SSL port).  Also requested a page through telnet and Konqueror.

2)  Add a plain text ErrorDocument for 400 requests.  Request a page

3)  Copy the HTTP_BAD_REQUEST.html.var files and the config to my test
server, request a page.

All three scenarios work for me on Linux.  There is a problem in the 3rd
case, which looks to be from a non-terminated string (bad, but not a
buffer overflow, we just forgot to add a \0).  I'll fix that quickly.
Paul or Allen, can either of you provide more details?  There really is
logic in the server to stop the ap_die calls from being recursive, so
this bug really surprises me.

Ryan





last modified header

2002-06-10 Thread Jie Gao

Hi,

From http://www.xav.com/scripts/search/help/1068.html:

On the Apache web server, the last modified HTTP header is returned if
the HTML or SHTML file is executable. If the file has only read
permission, then no last modified information is returned.

If this is still the case, it is impossible to take advantage of
Solaris' priority_paging if "last modified" header is also wanted.

Can this be changed to send "last modified" header everytime?

>  All priority paging does is set a new kernel parameter, cachefree, to
> twice the value of lotsfree (without priority paging, lotsfree is the
> parameter which kicks in page scanning).  With this set, the page scanner
> kicks in at cachefree, but will skip over pages of memory which it
> determines to be part of an executable program.  Care must be taken,
> because the way priority paging determines if a page is part of an
> executable is if it comes from a file with the execute bit set.  Therefore,
> if a data file has the execute bit set, priority paging will think it's an
> executable and skip over it.

Thanks,


Jie




Re: [PATCH] SSL, POST, and renegotiation

2002-06-10 Thread Justin Erenkrantz

On Mon, Jun 10, 2002 at 04:20:06PM -0600, Nathan Friess wrote:
> A while back I started working with the httpd sources in an attempt to create
> the missing code for POSTing over SSL when renegotiation is required.  I
> made the necessary changes, tested the code using several 1 to 30 megabyte
> binary files, and it seems to work nicely.

Um, what problem are you seeing?  -- justin

P.S.  core_request_config->bb shouldn't be used at all.



[PATCH] SSL, POST, and renegotiation

2002-06-10 Thread Nathan Friess

A while back I started working with the httpd sources in an attempt to create
the missing code for POSTing over SSL when renegotiation is required.  I
made the necessary changes, tested the code using several 1 to 30 megabyte
binary files, and it seems to work nicely.

The body is sucked up with ap_get_client_block() and related calls, and
added to a brigade, which is placed in core_request_config->bb -- the same
place that ap_get_client_block() uses.  Since mod_cgi[d] now uses the
brigades instead of ap_... calls to get the body, that code needed to be
updated to use the core_request_config->bb brigade first (which makes sense
to me whether used with SSL or not).  ap_discard_request_body() also needed
to be updated for the same reason.

Below is the entire patch against the 'latest' CVS.  If the patch looks
good, then feel free to add it.  If something looks wrong, please let me
know, as I'm interested in continuing to work on this and other useful
changes.

Nathan

Index: modules/generators/mod_cgi.c
===
RCS file: /home/cvspublic/httpd-2.0/modules/generators/mod_cgi.c,v
retrieving revision 1.141
diff -u -r1.141 mod_cgi.c
--- modules/generators/mod_cgi.c 6 Jun 2002 00:16:59 - 1.141
+++ modules/generators/mod_cgi.c 10 Jun 2002 20:41:34 -
@@ -604,6 +604,9 @@
 cgi_server_conf *conf;
 apr_status_t rv;
 cgi_exec_info_t e_info;
+core_request_config *req_cfg =
+  (core_request_config *)ap_get_module_config(r->request_config,
+&core_module);

 if(strcmp(r->handler, CGI_MAGIC_TYPE) && strcmp(r->handler,
"cgi-script"))
 return DECLINED;
@@ -684,7 +687,7 @@
 /* Transfer any put/post args, CERN style...
  * Note that we already ignore SIGPIPE in the core server.
  */
-bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
+bb = req_cfg->bb;
 seen_eos = 0;
 child_stopped_reading = 0;
 if (conf->logname) {
@@ -694,12 +697,14 @@
 do {
 apr_bucket *bucket;

-rv = ap_get_brigade(r->input_filters, bb, AP_MODE_READBYTES,
-APR_BLOCK_READ, HUGE_STRING_LEN);
+ if (APR_BRIGADE_EMPTY(bb)) {
+   rv = ap_get_brigade(r->input_filters, bb, AP_MODE_READBYTES,
+ APR_BLOCK_READ, HUGE_STRING_LEN);

-if (rv != APR_SUCCESS) {
+   if (rv != APR_SUCCESS) {
 return rv;
-}
+   }
+ }

 APR_BRIGADE_FOREACH(bucket, bb) {
 const char *data;
@@ -746,7 +751,6 @@
 child_stopped_reading = 1;
 }
 }
-apr_brigade_cleanup(bb);
 }
 while (!seen_eos);

@@ -764,6 +768,7 @@
 char sbuf[MAX_STRING_LEN];
 int ret;

+ bb = apr_brigade_create(r->pool, c->bucket_alloc);
 b = apr_bucket_pipe_create(script_in, c->bucket_alloc);
 APR_BRIGADE_INSERT_TAIL(bb, b);
 b = apr_bucket_eos_create(c->bucket_alloc);
Index: modules/generators/mod_cgid.c
===
RCS file: /home/cvspublic/httpd-2.0/modules/generators/mod_cgid.c,v
retrieving revision 1.134
diff -u -r1.134 mod_cgid.c
--- modules/generators/mod_cgid.c 30 May 2002 05:42:46 - 1.134
+++ modules/generators/mod_cgid.c 10 Jun 2002 20:41:36 -
@@ -1026,6 +1026,9 @@
 int sd;
 char **env;
 apr_file_t *tempsock;
+core_request_config *req_cfg =
+  (core_request_config *)ap_get_module_config(r->request_config,
+&core_module);

 if (strcmp(r->handler,CGI_MAGIC_TYPE) &&
strcmp(r->handler,"cgi-script"))
 return DECLINED;
@@ -1110,7 +1113,7 @@
 /* Transfer any put/post args, CERN style...
  * Note that we already ignore SIGPIPE in the core server.
  */
-bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
+bb = req_cfg->bb;
 seen_eos = 0;
 child_stopped_reading = 0;
 if (conf->logname) {
@@ -1121,12 +1124,14 @@
 apr_bucket *bucket;
 apr_status_t rv;

-rv = ap_get_brigade(r->input_filters, bb, AP_MODE_READBYTES,
-APR_BLOCK_READ, HUGE_STRING_LEN);
+ if (APR_BRIGADE_EMPTY(bb)) {
+   rv = ap_get_brigade(r->input_filters, bb, AP_MODE_READBYTES,
+ APR_BLOCK_READ, HUGE_STRING_LEN);

-if (rv != APR_SUCCESS) {
+   if (rv != APR_SUCCESS) {
 return rv;
-}
+   }
+ }

 APR_BRIGADE_FOREACH(bucket, bb) {
 const char *data;
@@ -1173,7 +1178,6 @@
 child_stopped_reading = 1;
 }
 }
-apr_brigade_cleanup(bb);
 }
 while (!seen_eos);

Index: modules/http/http_protocol.c
===
RCS file: /home/cvspublic/httpd-2.0/modules/http/http_protocol.c,v
retrieving revision 1.434
diff -u -r1.434 http_protocol.c
--- modules/http/http_protocol.c 8 Jun 2002 04:36:05 - 1.434
+++ modules/http/http_protocol.c 10 Jun 2002 20:41:40 -
@@ -1867,7 +1867,10 @@
  */
 AP

RE: Recursive error processing.

2002-06-10 Thread Cliff Woolley

On Mon, 10 Jun 2002, Ryan Bloom wrote:

> Please make sure that your code is up to date, because the server is
> supposed to have logic that protects us from getting into an infinite
> loop.

Paul, I notice the line numbers in your back trace don't quite match up
with mine... is this HEAD?  Or are there local mods?

> Wait a sec, the problem could be the ErrorDocument path.  The test suite
> doesn't exercise that path.  Will report back soon.

Ah.  Well I'll wait for Ryan to check that then.

--Cliff




RE: Recursive error processing.

2002-06-10 Thread Ryan Bloom

I can't reproduce this.  This test case is actually tested for in the
test suite.  Which SSL library are you using?  I was going off of the
assumption that the ap_discard_request_body() changes had broken this,
but since I have the most up-to-date code, I don't believe that the two
are related.

Please make sure that your code is up to date, because the server is
supposed to have logic that protects us from getting into an infinite
loop.

Wait a sec, the problem could be the ErrorDocument path.  The test suite
doesn't exercise that path.  Will report back soon.

Ryan

--
Ryan Bloom  [EMAIL PROTECTED]
645 Howard St.  [EMAIL PROTECTED]
San Francisco, CA 

> -Original Message-
> From: Ryan Bloom [mailto:[EMAIL PROTECTED]]
> Sent: Monday, June 10, 2002 2:55 PM
> To: [EMAIL PROTECTED]
> Subject: RE: Recursive error processing.
> 
> > From: Paul J. Reder [mailto:[EMAIL PROTECTED]]
> >
> > While Allan Edwards and I were doing some testing of SSL we ran into a
> > case where we were able to send Apache into an infinite loop which
> > eventually consumed the machine's resources.
> >
> > The problem occurs if you send a request to "http://some.where.com:443"
> > (instead of "https://some.where.com:443").
> 
> This was working a few days ago, and is in fact a part of the test
> suite.
> 
> > The problem seems to be related to the fact that ap_die should be
> > killing the custom_response and just dropping the connection (which is
> > what 1.3 does) rather than falling through and trying to send a custom
> > response via internal_redirect.
> 
> 1.3 doesn't drop the connection, it sends a custom response.
> 
> > Is this an artifact of the recent changes for 401/413 processing? Is
> > this symptomatic of a bigger problem of infinite loops during error
> > redirects?
> >
> > This all starts because the SSL post_read_request hook function
> > (ssl_hook_ReadReq) returns HTTP_BAD_REQUEST after finding
> > sslconn->non_ssl_request set to 1 (by ssl_io_filter_input after it
> > notices ssl_connect fails in ssl_hook_process_connection).
> 
> Hold on, I think I know what the problem is, I'll try to commit a fix
in
> a few minutes.
> 
> Ryan
> 





RE: Tagging 2.0.37

2002-06-10 Thread Cliff Woolley

On Mon, 10 Jun 2002, Ryan Bloom wrote:

> The issue Paul just raised is a show stopper.  Give me a few minutes to
> track it down please.

No prob.




RE: Tagging 2.0.37

2002-06-10 Thread Ryan Bloom

The issue Paul just raised is a show stopper.  Give me a few minutes to
track it down please.

Ryan

--
Ryan Bloom  [EMAIL PROTECTED]
645 Howard St.  [EMAIL PROTECTED]
San Francisco, CA 

> -Original Message-
> From: Cliff Woolley [mailto:[EMAIL PROTECTED]]
> Sent: Monday, June 10, 2002 2:55 PM
> To: [EMAIL PROTECTED]
> Subject: Tagging 2.0.37
> 
> 
> Since we're not getting anywhere on the "showstopper" 304 issue, I'm
> getting sick of putting this thing off any longer.  Unless somebody
speaks
> up in the next hour or so, I'm tagging 2.0.37.
> 
> --Cliff





Tagging 2.0.37

2002-06-10 Thread Cliff Woolley


Since we're not getting anywhere on the "showstopper" 304 issue, I'm
getting sick of putting this thing off any longer.  Unless somebody speaks
up in the next hour or so, I'm tagging 2.0.37.

--Cliff




RE: Recursive error processing.

2002-06-10 Thread Ryan Bloom

> From: Paul J. Reder [mailto:[EMAIL PROTECTED]]
>
> While Allan Edwards and I were doing some testing of SSL we ran into a
> case where we were able to send Apache into an infinite loop which
> eventually consumed the machine's resources.
> 
> The problem occurs if you send a request to "http://some.where.com:443"
> (instead of "https://some.where.com:443").

This was working a few days ago, and is in fact a part of the test
suite.

> The problem seems to be related to the fact that ap_die should be
> killing the custom_response and just dropping the connection (which is
> what 1.3 does) rather than falling through and trying to send a custom
> response via internal_redirect.

1.3 doesn't drop the connection, it sends a custom response.

> Is this an artifact of the recent changes for 401/413 processing? Is
> this symptomatic of a bigger problem of infinite loops during error
> redirects?
> 
> This all starts because the SSL post_read_request hook function
> (ssl_hook_ReadReq) returns HTTP_BAD_REQUEST after finding
> sslconn->non_ssl_request set to 1 (by ssl_io_filter_input after it
> notices ssl_connect fails in ssl_hook_process_connection).

Hold on, I think I know what the problem is, I'll try to commit a fix in
a few minutes.

Ryan





Recursive error processing.

2002-06-10 Thread Paul J. Reder

While Allan Edwards and I were doing some testing of SSL we ran into a case
where we were able to send Apache into an infinite loop which eventually
consumed the machine's resources.

The problem occurs if you send a request to "http://some.where.com:443" (instead
of "https://some.where.com:443").

Apache tries to return a custom error but gets confused while trying to
send the error response to a non-secure request over an apparently secure
connection.

A small snip of the back trace follows:

#590 0x080a1549 in ap_die (type=400, r=0x81e0e58) at http_request.c:198
#591 0x080a1b51 in internal_internal_redirect (new_uri=0x8188770 
"/error/HTTP_BAD_REQUEST.html.var", r=0x81e0650) at http_request.c:408
#592 0x080a1e10 in ap_internal_redirect (new_uri=0x8188770 
"/error/HTTP_BAD_REQUEST.html.var", r=0x81e0650) at http_request.c:483
#593 0x080a1549 in ap_die (type=400, r=0x81e0650) at http_request.c:198
#594 0x080a1b51 in internal_internal_redirect (new_uri=0x8188770 
"/error/HTTP_BAD_REQUEST.html.var", r=0x81df1c0) at http_request.c:408
#595 0x080a1e10 in ap_internal_redirect (new_uri=0x8188770 
"/error/HTTP_BAD_REQUEST.html.var", r=0x81df1c0) at http_request.c:483
#596 0x080a1549 in ap_die (type=400, r=0x81df1c0) at http_request.c:198
#597 0x080e4c24 in ap_read_request (conn=0x81cce58) at protocol.c:982
#598 0x0809bc63 in ap_process_http_connection (c=0x81cce58) at http_core.c:284
#599 0x080dfc17 in ap_run_process_connection (c=0x81cce58) at connection.c:85
#600 0x080dffbe in ap_process_connection (c=0x81cce58, csd=0x81ccd88) at 
connection.c:207
#601 0x080d2123 in child_main (child_num_arg=0) at prefork.c:671
#602 0x080d21f4 in make_child (s=0x8132908, slot=0) at prefork.c:711
---Type  to continue, or q  to quit---
#603 0x080d2321 in startup_children (number_to_start=5) at prefork.c:783
#604 0x080d274b in ap_mpm_run (_pconf=0x812e9f0, plog=0x8172b00, s=0x8132908) at 
prefork.c:999
#605 0x080d8e98 in main (argc=5, argv=0xb884) at main.c:646
#606 0x402be627 in __libc_start_main (main=0x80d8600 , argc=5, 
ubp_av=0xb884, init=0x80658f0 <_init>, fini=0x80f81e0 <_fini>, 
rtld_fini=0x4000dcc4 <_dl_fini>,
 stack_end=0xb87c) at ../sysdeps/generic/libc-start.c:129

The loop (ap_die, ap_internal_redirect, internal_internal_redirect, ap_die...) happens
until the system dies (due to newly allocated request_recs).

The problem seems to be related to the fact that ap_die should be killing
the custom_response and just dropping the connection (which is what 1.3 does)
rather than falling through and trying to send a custom response via internal_redirect.

Is this an artifact of the recent changes for 401/413 processing? Is this symptomatic
of a bigger problem of infinite loops during error redirects?

This all starts because the SSL post_read_request hook function (ssl_hook_ReadReq)
returns HTTP_BAD_REQUEST after finding sslconn->non_ssl_request set to 1 (by
ssl_io_filter_input after it notices ssl_connect fails in ssl_hook_process_connection).

Thanks for any pointers here.

-- 
Paul J. Reder
---
"The strength of the Constitution lies entirely in the determination of each
citizen to defend it.  Only if every single citizen feels duty bound to do
his share in this defense are the constitutional rights secure."
-- Albert Einstein





Re: [PHP-DEV] Re: PHP profiling results under 2.0.37 Re: Performance of Apache 2.0 Filter

2002-06-10 Thread William A. Rowe, Jr.

At 04:08 AM 6/8/2002, Andi Gutmans wrote:

>I just checked and it seems like Apache APR memory pools use mutex locking.
>It'd be better to use functions like the Win32 ones which don't use mutex 
>locking (as we made sure that only one thread allocates from its pool). 
>This could be achieved by compiling apr_pools.c without APR_HAS_THREADS 
>but I bet the default Apache 2 build has this enabled.

It's still pretty much a non-issue.  Although we've discussed thread-specific
allocators [that don't lock for allocation at all], Win32 uses CriticalSections
by default, which adds 10 cpu instructions or so to obtain an uncontested mutex.

This probably would hurt Unix, so you might be interested in the apr_pools
discussion of the apr_allocator approaches.  All these low-level discussions
are on APR, so I'm directing this discussion to that platform.

Nice to see another library leaning on APR [or at least, the Zend deployment
for PHP/Apache :-]

Bill






Re: Problems with Apache 2.0.3.x as service on WinXP

2002-06-10 Thread William A. Rowe, Jr.

At 08:39 AM 6/9/2002, you wrote:
>"William A. Rowe, Jr." wrote:
> >
> > Juergen,
> >
> >Yes,  isn't even a path.  Try .  Same
> > for DocumentRoot.  "r:" says "The current working directory, on r:" which
> > is absolutely meaningless for a service (and too vague for general 
> practice.)
>thank you,
>but this does not solve the problem with Apache:
>Syntax error:  path is invalid.
>Syntax error: DocumentRoot must be a directory

I will give you that this _IS_ a problem.  That's like the system 
responding that
 is invalid, while of course it is just fine.

Will research by tonight, perhaps this can be fixed by .38 :-)






Re: how many EOS buckets should a filter expect? (subrequest, PR 9644)

2002-06-10 Thread Jeff Trawick

"Ryan Bloom" <[EMAIL PROTECTED]> writes:

> > >  void ap_finalize_sub_req_protocol(request_rec *sub)
> > >  {
> > > -end_output_stream(sub);
> > > +/* tell the filter chain there is no more content coming */
> > > +if (!sub->eos_sent) {
> > > +end_output_stream(sub);
> > > +}
> > >  }
> > 
> > It probably should have been added here back in Sept 2000 when you
> > added the check to ap_finalize_request_protocol().  I'll add it for
> > the subrequest path now.
> 
> Yeah, it should have been added at the same time.

weird that it went for so long without being noticed...

Thanks for the sanity checking...
-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



RE: how many EOS buckets should a filter expect? (subrequest, PR 9644)

2002-06-10 Thread Ryan Bloom

> From: [EMAIL PROTECTED] [mailto:trawick@rdu88-251-


> Jeff Trawick <[EMAIL PROTECTED]> writes:
> 
> > I suspect you're talking about this line of code which doesn't exist
> > in CVS:
> >
> > Index: server/protocol.c
> > ===
> > RCS file: /home/cvs/httpd-2.0/server/protocol.c,v
> > retrieving revision 1.105
> > diff -u -r1.105 protocol.c
> > --- server/protocol.c   7 Jun 2002 22:31:34 -   1.105
> > +++ server/protocol.c   10 Jun 2002 18:33:54 -
> > @@ -1033,7 +1033,10 @@
> >
> >  void ap_finalize_sub_req_protocol(request_rec *sub)
> >  {
> > -end_output_stream(sub);
> > +/* tell the filter chain there is no more content coming */
> > +if (!sub->eos_sent) {
> > +end_output_stream(sub);
> > +}
> >  }
> 
> It probably should have been added here back in Sept 2000 when you
> added the check to ap_finalize_request_protocol().  I'll add it for
> the subrequest path now.

Yeah, it should have been added at the same time.

Ryan





Re: how many EOS buckets should a filter expect? (subrequest, PR 9644)

2002-06-10 Thread Jeff Trawick

Jeff Trawick <[EMAIL PROTECTED]> writes:

> I suspect you're talking about this line of code which doesn't exist
> in CVS:
> 
> Index: server/protocol.c
> ===
> RCS file: /home/cvs/httpd-2.0/server/protocol.c,v
> retrieving revision 1.105
> diff -u -r1.105 protocol.c
> --- server/protocol.c 7 Jun 2002 22:31:34 -   1.105
> +++ server/protocol.c 10 Jun 2002 18:33:54 -
> @@ -1033,7 +1033,10 @@
>  
>  void ap_finalize_sub_req_protocol(request_rec *sub)
>  {
> -end_output_stream(sub);
> +/* tell the filter chain there is no more content coming */
> +if (!sub->eos_sent) {
> +end_output_stream(sub);
> +}
>  }

It probably should have been added here back in Sept 2000 when you
added the check to ap_finalize_request_protocol().  I'll add it for
the subrequest path now.

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: how many EOS buckets should a filter expect? (subrequest, PR 9644)

2002-06-10 Thread Jeff Trawick

"Ryan Bloom" <[EMAIL PROTECTED]> writes:

> > From: [EMAIL PROTECTED] [mailto:trawick@rdu88-251-
> > 
> > Initially I would think that a filter should see at most one EOS.
> > mod_ext_filter doesn't have logic to ignore subsequent ones, resulting
> > in a superfluous error message from a failed syscall when it tries to
> > re-do some cleanup when it hits a second EOS.
> > 
> > In this case, the subrequest is handled by default_handler which
> > passes down a FILE bucket and an EOS bucket.  After that has
> > completed, ap_finalize_sub_req_protocol() passes down another EOS
> > bucket.  Why does ap_finalize_sub_req_protocol() pass down an EOS?
> > Isn't the handler responsible for that?  Is this to clean up in case
> > the handler encountered an error and failed to pass down an EOS?
> 
> Output filters can only support and expect a single EOS bucket.  Input
> filters, however, seem to be moving to a multi-EOS model.

okay so far

> Ap_finalize_sub_req_protocol sends down an EOS bucket just like
> ap_finalize_request does.  That means that it is only sent if the
> handler didn't send it.

I suspect you're talking about this line of code which doesn't exist
in CVS:

Index: server/protocol.c
===
RCS file: /home/cvs/httpd-2.0/server/protocol.c,v
retrieving revision 1.105
diff -u -r1.105 protocol.c
--- server/protocol.c   7 Jun 2002 22:31:34 -   1.105
+++ server/protocol.c   10 Jun 2002 18:33:54 -
@@ -1033,7 +1033,10 @@
 
 void ap_finalize_sub_req_protocol(request_rec *sub)
 {
-end_output_stream(sub);
+/* tell the filter chain there is no more content coming */
+if (!sub->eos_sent) {
+end_output_stream(sub);
+}
 }
 
 /* finalize_request_protocol is called at completion of sending the

Is that what you expected was there?  (the PR 9644 scenario is fine
with this patch; I'm poking around now to see if we used to have this
check and removed it for some reason)

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



RE: code sharing in authentication

2002-06-10 Thread Cliff Woolley

On Mon, 10 Jun 2002, Rob Emanuele wrote:

> I was wondering if they use each other or can use each other?
> Can they share code?  For example the mod_auth_digest module and
> the mod_auth_mysql or mod_auth_dbm, can the latter modules make
> use of the digest code?

As they're currently written, no, they can't use each others' code.  The
addition of some optional functions might make it possible for them to do
so... though it has been stated that one of the goals for Apache 2.1 or
3.0 is to redo the auth/access modules so that they are more modularized
in this way, so I don't know how much effort it's worth putting into it at
this point.
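
A hedged sketch of what that could look like with APR's optional functions
(apr_optional.h); the function and argument names here are invented:

/* in the module providing the digest check */
#include "httpd.h"
#include "apr_optional.h"

APR_DECLARE_OPTIONAL_FN(int, example_check_digest,
                        (request_rec *r, const char *user, const char *ha1));

static int example_check_digest(request_rec *r, const char *user,
                                const char *ha1)
{
    /* ... verify the Digest response against the stored HA1 hash ... */
    return OK;
}

static void register_hooks(apr_pool_t *p)
{
    APR_REGISTER_OPTIONAL_FN(example_check_digest);
}

/* in the module that stores users in MySQL/DBM */
static APR_OPTIONAL_FN_TYPE(example_check_digest) *check_digest_fn;

static void retrieve_optional_fns(void)
{
    /* NULL if the providing module isn't loaded */
    check_digest_fn = APR_RETRIEVE_OPTIONAL_FN(example_check_digest);
}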

--Cliff




RE: how many EOS buckets should a filter expect? (subrequest, PR 9644)

2002-06-10 Thread Ryan Bloom

> From: [EMAIL PROTECTED] [mailto:trawick@rdu88-251-
> 
> Initially I would think that a filter should see at most one EOS.
> mod_ext_filter doesn't have logic to ignore subsequent ones, resulting
> in a superfluous error message from a failed syscall when it tries to
> re-do some cleanup when it hits a second EOS.
> 
> In this case, the subrequest is handled by default_handler which
> passes down a FILE bucket and an EOS bucket.  After that has
> completed, ap_finalize_sub_req_protocol() passes down another EOS
> bucket.  Why does ap_finalize_sub_req_protocol() pass down an EOS?
> Isn't the handler responsible for that?  Is this to clean up in case
> the handler encountered an error and failed to pass down an EOS?

Output filters can only support and expect a single EOS bucket.  Input
filters, however, seem to be moving to a multi-EOS model.
Ap_finalize_sub_req_protocol sends down an EOS bucket just like
ap_finalize_request does.  That means that it is only sent if the
handler didn't send it.  The sub-request's EOS is stripped off by the
SUB_REQ_FILTER, and is only used to signify the end of the sub-request.
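
A hedged sketch of how an output filter can guard against a second EOS
(filter and context names invented):

typedef struct {
    int saw_eos;
} example_ctx;

static apr_status_t example_output_filter(ap_filter_t *f,
                                          apr_bucket_brigade *bb)
{
    example_ctx *ctx = f->ctx;

    if (ctx == NULL) {
        ctx = f->ctx = apr_pcalloc(f->r->pool, sizeof(*ctx));
    }
    if (ctx->saw_eos) {
        /* already finished once; don't redo the end-of-stream cleanup */
        return ap_pass_brigade(f->next, bb);
    }
    if (!APR_BRIGADE_EMPTY(bb) && APR_BUCKET_IS_EOS(APR_BRIGADE_LAST(bb))) {
        ctx->saw_eos = 1;
        /* ... do the end-of-stream cleanup exactly once ... */
    }
    /* ... normal processing of bb ... */
    return ap_pass_brigade(f->next, bb);
}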

Ryan





RE: code sharing in authentication

2002-06-10 Thread Rob Emanuele

Does anyone have any answers here?  Or am I asking this question
to the wrong list?

Thanks,  Rob

-Original Message-
From: Rob Emanuele [mailto:[EMAIL PROTECTED]]
Sent: Friday, June 07, 2002 3:27 PM
To: [EMAIL PROTECTED]
Subject: code sharing in authentication


I'm curious to the inner workings of the authentication modules
for 1.3 and 2.0.

I was wondering if they use each other or can use each other?
Can they share code?  For example the mod_auth_digest module and
the mod_auth_mysql or mod_auth_dbm, can the latter modules make
use of the digest code?

I'd like to use digest authentication against a mysql database.
Will this require me to merge the code of the digest and mysql
modules into a completely new module?  Or can I make the mysql
module use the functions of the digest module?

Thanks,

Rob Emanuele




how many EOS buckets should a filter expect? (subrequest, PR 9644)

2002-06-10 Thread Jeff Trawick

Initially I would think that a filter should see at most one EOS.
mod_ext_filter doesn't have logic to ignore subsequent ones, resulting
in a superfluous error message from a failed syscall when it tries to
re-do some cleanup when it hits a second EOS.

In this case, the subrequest is handled by default_handler which
passes down a FILE bucket and an EOS bucket.  After that has
completed, ap_finalize_sub_req_protocol() passes down another EOS
bucket.  Why does ap_finalize_sub_req_protocol() pass down an EOS?
Isn't the handler responsible for that?  Is this to clean up in case
the handler encountered an error and failed to pass down an EOS?

-- 
Jeff Trawick | [EMAIL PROTECTED]
Born in Roswell... married an alien...



Re: [PHP-DEV] RE: PHP profiling results under 2.0.37 Re: Performance of Apache 2.0 Filter

2002-06-10 Thread Aaron Bannert

On Mon, Jun 10, 2002 at 09:29:58AM -0700, Aaron Bannert wrote:
>- creation (called once per server lifetime)
>- malloc (called many times per request)
>- free (called many times per request)
>- end-of-request (called many times per request)

(Whoops, that should have been -- called once at the end of the request)

>- destruction (called once per server lifetime)

-a



Re: [PHP-DEV] RE: PHP profiling results under 2.0.37 Re: Performance of Apache 2.0 Filter

2002-06-10 Thread Aaron Bannert

On Mon, Jun 10, 2002 at 11:46:46AM +0300, Zeev Suraski wrote:
> What we need for efficient thread-safe operation is a mechanism like the 
> Win32 heaps - mutexless heaps, that provide malloc and free services on a 
> (preferably) contiguous pre-allocated block of memory.  The question is 
> whether the APR allocators fall into that category:
> 
> 1.  Can you make them mutexless completely?  I.e., will they never call 
> malloc()?

APR's pools only use malloc() as a portable way to retrieve large
chunks of heapspace that are never returned. I don't know of any
other portable way to do this.

In any case, at some level you will always have a mutex. Either you
are mapping new segments in to the memory space of the process, or
you are dealing with freelists in userspace. APR pools take the
approach that by doing more up-front segment mapping and delayed
freeing of chunks, we avoid many of the mutexes and overhead of
freelist management. It's still got to happen somewhere though.

> 2.  They definitely do provide alloc/free services, we're ok there

Pretty much, but it's not exactly the same. I'll outline some thoughts
on a potential memory allocation abstraction that could be implemented
w/ apr_pools or Win32 heaps below...

> 3.  As far as I can tell, they don't use a contiguous block of memory, 
> which means more fragmentation...

I'm not sure how contiguity relates to fragmentation. With a pool
you can do mallocs all day long, slowly accumulating more 8K blocks
(which may or may not be contiguous). At the end of the pool lifetime
(let's say, for example, at the end of a request) then those blocks
are placed on a freelist, and the sub-partitions within those blocks
are simply forgotten. On the next request, the process starts over again.


I think to properly abstract a memory allocation scheme that can be
implemented in a way that is optimized for the particular SAPI module,
we'll have to abstract out a few concepts. This list is not exhaustive,
but is just a quick sketch based on my understanding of Win32 heaps
and APR pools:

   - creation (called once per server lifetime)
   - malloc (called many times per request)
   - free (called many times per request)
   - end-of-request (called many times per request)
   - destruction (called once per server lifetime)

Does this cover all our bases? For example, when using pools, the
free() call would do nothing, and the end-of-request call would simply
call apr_pool_clear(). Note that this only applies to dynamically
allocated memory required for the lifetime of a request. For memory
with longer lifetimes we could make the creation and destruction
routines more generic.
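
A hedged sketch of how those five operations map onto APR pools (the wrapper
names are invented; the apr_pool_* calls are the real API):

#include "apr_pools.h"

static apr_pool_t *request_pool;

void mem_create(apr_pool_t *parent)     /* once per server lifetime */
{
    apr_pool_create(&request_pool, parent);
}

void *mem_alloc(apr_size_t size)        /* many times per request */
{
    return apr_palloc(request_pool, size);
}

void mem_free(void *ptr)                /* a no-op with pools */
{
    (void)ptr;
}

void mem_end_of_request(void)           /* once at the end of the request */
{
    apr_pool_clear(request_pool);       /* blocks go back to the free list */
}

void mem_destroy(void)                  /* once per server lifetime */
{
    apr_pool_destroy(request_pool);
}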

-aaron



[REQ] 1.3: cygwin changes

2002-06-10 Thread Stipe Tolj

Hi Jim, Hi others,

there are still some open patches that I sent in and that have not yet been
committed, except for the long-standing src/helpers/install.sh issue.

I'm just wondering if these are going to be committed for 1.3.25?

Still open are patches to (sent in on 31 May):
 
  * src/helpers/binbuild.sh
  * src/modules/standard/Makefile.Cygwin
  * Makefile.tmpl  \
  * src/Configure   | for de-hardcoding $SHLIB_PREFIX_NAME
  * src/Makefile.tmpl  /

Regards,
Stipe

[EMAIL PROTECTED]
---
Wapme Systems AG

Vogelsanger Weg 80
40470 Düsseldorf

Tel: +49-211-74845-0
Fax: +49-211-74845-299

E-Mail: [EMAIL PROTECTED]
Internet: http://www.wapme-systems.de
---
wapme.net - wherever you are



Re: cvs commit: httpd-2.0/modules/generators mod_cgi.h

2002-06-10 Thread Greg Ames

[EMAIL PROTECTED] wrote:

>   Modified:modules/generators mod_cgi.h
>   Log:
> Once moved to a shared location, this bouncy #include dies
> 
>   Revision  ChangesPath
>   1.8   +1 -1  httpd-2.0/modules/generators/mod_cgi.h

>   @@ -59,7 +59,7 @@
>#ifndef _MOD_CGI_H
>#define _MOD_CGI_H 1
> 
>   -#include "../filters/mod_include.h"
>   +#include "mod_include.h"
> 
>typedef enum {RUN_AS_SSI, RUN_AS_CGI} prog_types;

This patch breaks the build if you enable mod_cgi[d] but exclude mod_include.

Greg

In file included from mod_cgi.c:96:
mod_cgi.h:62:25: mod_include.h: No such file or directory
make[3]: *** [mod_cgi.lo] Error 1
make[3]: Leaving directory `/home/gregames/apache/httpd-2.0/modules/generators'
make[2]: *** [all-recursive] Error 1



RE: HEAD Executes CGI on HEAD

2002-06-10 Thread Ryan Bloom

> From: Jerry Baker [mailto:[EMAIL PROTECTED]]
> 
> Is it correct for Apache to be executing includes when a HEAD request
is
> issued for a document that contains includes?

Yep.  Apache treats a HEAD request exactly like a GET request, except
that we don't return the body.  The HTTP spec states that we have to
return the same headers as we would return in a GET request, which
usually means that we need to actually run the request.

Ryan





HEAD Executes CGI on HEAD

2002-06-10 Thread Jerry Baker

Is it correct for Apache to be executing includes when a HEAD request is
issued for a document that contains includes?

-- 
Jerry Baker