digging out the missing error message

2004-05-05 Thread Geoffrey Young
hi all

a while ago Ken Coar brought up that Apache-Test doesn't print the final
test count when there are errors.  that is, we currently do this:

# Failed test 20 in t/apache/contentlength.t at line 54 fail #10
FAILED tests 2, 6, 10, 14, 16, 18, 20
Failed 7/20 tests, 65.00% okay
Failed Test  Stat Wstat Total Fail  Failed  List of Failed
---
Failed 1/1 test scripts, 0.00% okay. 7/20 subtests failed, 65.00% okay.
[warning] server localhost.localdomain:8529 shutdown

instead of this

# Failed test 20 in t/apache/contentlength.t at line 54 fail #10
FAILED tests 2, 6, 10, 14, 16, 18, 20
Failed 7/20 tests, 65.00% okay
Failed Test  Stat Wstat Total Fail  Failed  List of Failed
---
t/apache/contentlength.t   20    7  35.00%  2 6 10 14 16 18 20
Failed 1/1 test scripts, 0.00% okay. 7/20 subtests failed, 65.00% okay.
[warning] server localhost.localdomain:8529 shutdown

note the absence of the Failed 1/1... stuff in what we have currently.

well, I finally figured out why.  attached is a patch that fixes the
problem.  however, as you can see the installed SIG{__DIE__} handler (from
TestRun.pm) is keeping us from simply putting Test::Harness::runtests in an
eval block.  so the fix isn't really a fix until we figure out some other
stuff, and it may not be fixable at all.

anyone with insight into the current SIG{__DIE__} foo? it's in the ToDo as
something to consider removing, but I'm not sure how easy that would be.

--Geoff
Index: lib/Apache/TestHarness.pm
===
RCS file: /home/cvs/httpd-test/perl-framework/Apache-Test/lib/Apache/TestHarness.pm,v
retrieving revision 1.18
diff -u -r1.18 TestHarness.pm
--- lib/Apache/TestHarness.pm	4 Mar 2004 05:51:31 -	1.18
+++ lib/Apache/TestHarness.pm	5 May 2004 19:14:56 -
@@ -167,7 +167,12 @@
 $ENV{HTTPD_TEST_SUBTESTS} = @subtests;
 }
 
-    Test::Harness::runtests($self->get_tests($args, @_));
+    eval {
+        local $SIG{__DIE__};
+        Test::Harness::runtests($self->get_tests($args, @_));
+    };
+
+print $@ if $@;
 }
 
 1;
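The interaction the patch works around — a global die handler that exits the process before an eval can catch the error, unless the handler is suspended for the duration of the call — can be sketched in Python terms. This is an analogy with hypothetical names, not the actual Apache-Test code:

```python
import sys

# Sketch of the TestRun.pm situation: a global error hook exits the
# process, so a plain try/except around the test run never fires -- unless
# the hook is disabled locally, as `local $SIG{__DIE__}` does in Perl.

die_hook = None          # stands in for the installed $SIG{__DIE__} handler

def strict_hook(err):
    sys.exit(0)          # like exit_perl 0: the caller's handler never runs

def run_tests(ok):
    """Stands in for Test::Harness::runtests() dying on failure."""
    if not ok:
        err = RuntimeError("Failed 1/1 test scripts, 0.00% okay.")
        if die_hook is not None:
            die_hook(err)    # the global hook fires before the raise
        raise err

def run_with_local_hook(ok):
    """Equivalent of `local $SIG{__DIE__}`: suspend the hook, catch the
    error ourselves, and print the summary line the hook was swallowing."""
    global die_hook
    saved, die_hook = die_hook, None
    try:
        try:
            run_tests(ok)
        except RuntimeError as e:
            print(e)         # the final count now reaches the output
    finally:
        die_hook = saved     # restore, as `local` does at scope exit
```

With the hook left in place, the process exits before the summary is printed; with it suspended around the call, the "Failed 1/1..." line comes through.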


Re: digging out the missing error message

2004-05-05 Thread Stas Bekman
Geoffrey Young wrote:
hi all
a while ago Ken Coar brought up that Apache-Test doesn't print the final
test count when there are errors.  that is, we currently do this:
# Failed test 20 in t/apache/contentlength.t at line 54 fail #10
FAILED tests 2, 6, 10, 14, 16, 18, 20
Failed 7/20 tests, 65.00% okay
Failed Test  Stat Wstat Total Fail  Failed  List of Failed
---
Failed 1/1 test scripts, 0.00% okay. 7/20 subtests failed, 65.00% okay.
[warning] server localhost.localdomain:8529 shutdown
instead of this
# Failed test 20 in t/apache/contentlength.t at line 54 fail #10
FAILED tests 2, 6, 10, 14, 16, 18, 20
Failed 7/20 tests, 65.00% okay
Failed Test  Stat Wstat Total Fail  Failed  List of Failed
---
t/apache/contentlength.t   20    7  35.00%  2 6 10 14 16 18 20
Failed 1/1 test scripts, 0.00% okay. 7/20 subtests failed, 65.00% okay.
[warning] server localhost.localdomain:8529 shutdown
note the absence of the Failed 1/1... stuff in what we have currently.
Not sure what you are talking about above, the only difference between the two 
is in line:

t/apache/contentlength.t   20    7  35.00%  2 6 10 14 16 18 20
What is Failed 1/1... stuff?
How did you invoke the script so I can reproduce the same? 'make test'?
__
Stas BekmanJAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide --- http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com


Re: digging out the missing error message

2004-05-05 Thread Geoffrey Young

 Not sure what you are talking about above, the only difference between
 the two is in line:

blarg, cut and paste error.

without my patch, it looks like this (note it's 1.3)

[EMAIL PROTECTED] perl-framework]$ t/TEST t/apache/contentlength.t -v
/apache/1.3/dso/perl-5.8.4/bin/httpd -d /src/httpd-test/perl-framework/t -f
/src/httpd-test/perl-framework/t/conf/httpd.conf -D APACHE1 -D PERL_USEITHREADS
using Apache/1.3.31-dev

# testing : response codes
# expected: HTTP/1.1 413 Request Entity Too Large
# received: HTTP/1.1 400 Bad Request
not ok 20
# Failed test 20 in t/apache/contentlength.t at line 54 fail #10
FAILED tests 2, 6, 10, 14, 16, 18, 20
Failed 7/20 tests, 65.00% okay
Failed Test  Stat Wstat Total Fail  Failed  List of Failed
---
t/apache/contentlength.t   20    7  35.00%  2 6 10 14 16 18 20
[warning] server localhost.localdomain:8529 shutdown

the problem is the lack of this line:

Failed 1/1 test scripts, 0.00% okay. 7/20 subtests failed, 65.00% okay.

which is produced when Test::Harness::_show_results() dies.

--Geoff



Re: digging out the missing error message

2004-05-05 Thread Stas Bekman
Geoffrey Young wrote:
Not sure what you are talking about above, the only difference between
the two is in line:

blarg, cut and paste error.
without my patch, it looks like this (note it's 1.3)
[EMAIL PROTECTED] perl-framework]$ t/TEST t/apache/contentlength.t -v
/apache/1.3/dso/perl-5.8.4/bin/httpd -d /src/httpd-test/perl-framework/t -f
/src/httpd-test/perl-framework/t/conf/httpd.conf -D APACHE1 -D PERL_USEITHREADS
using Apache/1.3.31-dev
# testing : response codes
# expected: HTTP/1.1 413 Request Entity Too Large
# received: HTTP/1.1 400 Bad Request
not ok 20
# Failed test 20 in t/apache/contentlength.t at line 54 fail #10
FAILED tests 2, 6, 10, 14, 16, 18, 20
Failed 7/20 tests, 65.00% okay
Failed Test  Stat Wstat Total Fail  Failed  List of Failed
---
t/apache/contentlength.t   20    7  35.00%  2 6 10 14 16 18 20
[warning] server localhost.localdomain:8529 shutdown
the problem is the lack of this line:
Failed 1/1 test scripts, 0.00% okay. 7/20 subtests failed, 65.00% okay.
which is produced when Test::Harness::_show_results() dies.
Got it. Why not just do this:
Index: Apache-Test/lib/Apache/TestRun.pm
===
RCS file: 
/home/cvs/httpd-test/perl-framework/Apache-Test/lib/Apache/TestRun.pm,v
retrieving revision 1.166
diff -u -r1.166 TestRun.pm
--- Apache-Test/lib/Apache/TestRun.pm   16 Apr 2004 20:29:23 -  1.166
+++ Apache-Test/lib/Apache/TestRun.pm   5 May 2004 22:40:28 -
@@ -347,6 +347,7 @@
     $SIG{__DIE__} = sub {
         return unless $_[0] =~ /^Failed/i; #dont catch Test::ok failures
+        print $_[0];
         $server->stop(1) if $opts->{'start-httpd'};
         $server->failed_msg("error running tests");
         exit_perl 0;
--
__
Stas BekmanJAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide --- http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com


Re: digging out the missing error message

2004-05-05 Thread Geoffrey Young

 Got it. Why not just do this:

  return unless $_[0] =~ /^Failed/i; #dont catch Test::ok failures
 +print $_[0];

truthfully, I spent far too long trying to figure out why the die() wasn't
cascading.  once I got it I just patched it and let the patch fly without
too much afterthought.

but yeah, I think that will work just fine.  is there any reason you can
think of that we wouldn't want to see the error?  or if the error should
sometimes be preceded with '#'?

--Geoff


Re: Sample code for IPC in modules

2004-05-05 Thread Sander Temme
Hi Mark,

Thanks for your observations.

On May 4, 2004, at 7:18 PM, mark wrote:

Your attach logic should work, however it raises privilege issues 
because the children run as a different user (nobody or www, etc.) 
than the process running the create (root). I had problems when I was 
doing it that way.
I have to admit I hadn't tested as root, and it does exhibit permission 
problems on linux and darwin as well. Omitting the attach solves that 
problem, and I also got reacquainted with 
unixd_set_global_mutex_perms(). So, both of those are working now.

2)
Detach is never needed. However, depending on desired results, it is 
usually desirable to perform a destroy when a HUP signal is sent, so 
that it gets created fresh by post_config

I've run into strange errors under high load where newly forked 
children startup thinking they are attached to the inherited shm seg, 
but are in fact attached to some anonymous new segment. No error is 
produced, but obviously it's a catastrophic situation.
Yeah, that would be Bad. However, how does one hook into the SIGHUP 
handler? AFAIK, httpd has its own signal handlers that do stuff like 
restarts and graceful, and if I registered another handler, I would 
overrule the one that httpd sets. Or is there a provision for a chain 
of signal handlers?

I put the new version at http://apache.org/~sctemme/mod_example_ipc.c 
to save on e-mail bandwidth.

Thanks again,

S.

--
[EMAIL PROTECTED]  http://www.temme.net/sander/
PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF




Re: mod_proxy distinguish cookies?

2004-05-05 Thread TOKILEY

 Roy T. Fielding wrote:

 I do wish people would read the specification to refresh their memory
 before summarizing. RFC 2616 doesn't say anything about cookies -- it
 doesn't have to because there are already several mechanisms for marking
 a request or response as varying. In this case
 
 Vary: Cookie
 
 added to the response by the server module (the only component capable
 of knowing how the resource varies) is sufficient for caching clients
 that are compliant with HTTP/1.1.

 Graham wrote...

 My sentence "RFC2616 does not consider a request with a different cookie 
 a different variant" should have read "RFC2616 does not recognise 
 cookies specifically at all, as they are just another header". I did not 
 think of the Vary case, sorry for the confusion.
 
 Regards,
 Graham

"Vary" still won't work for the original caller's scenario.

Few people know this but Microsoft Internet Explorer and other
major browsers only PRETEND to support "Vary:".

In MSIE's case... there is only 1 value that you can use with
"Vary:" that will cause MSIE to make any attempt at all to
cache the response and/or deal with a refresh later.

That value is "User-Agent".

MSIE treats all other "Vary:" header values as if it
received "Vary: *" and will REFUSE to cache that
response at all.

This means that if you try and use "Vary:" for anything
other than "User-Agent" then the browser is going to
not cache anything (ever) and will be hammering away at
the unlucky nearest target ProxyCache and/or Content Server.

Why in the world an end-point User-Agent would only be
interested in doing a "Vary:" on its own name ( which it
already knows ) ceases to be a mystery if you read the
following link. The HACK that Microsoft added actually
originated as a problem report to the Apache Group itself
back in 1999...

URI title: Client bug: IE 4.0 breaks with "Vary" header.

http://bugs.apache.org/index.cgi/full/4118

Microsoft reacted to the problem with a simple HACK that
just looks for "User-Agent" and this fixed 4.0.

That simple hack is the only "Vary:" support MSIE really
has to this day.

The following message thread at W3C.ORG itself proves
that the "Vary:" problem still exists with MSIE 6.0 ( and other
major browsers )...

http://lists.w3.org/Archives/Public/ietf-http-wg/2002AprJun/0046.html

There is also a lengthy discussion about why "Vary:" is a 
nightmare on the client side at the mod_gzip forum.
The discussion centers on the fact that major browsers will
refuse to cache responses locally that have 
"Vary: Accept-encoding" and will end up hammering 
Content Servers but the discussion expanded when it
was discovered that most browsers won't do "Vary:" at all.

http://lists.over.net/pipermail/mod_gzip/2002-December/006838.html

As far as this fellow's 'Cookie' issue goes... there is, in fact, a 
TRICK that you can use ( for MSIE, anyway ) that 
actually works.

Just defeat the HACK with another HACK.

If a COS ( Content Origin Server ) sends out a 
"Vary: User-Agent" then most major browsers
( MSIE included ) will, in fact, cache the response
locally and will 'react' to changes in "User-Agent:"
field when it sends out an "If-Modified-Since:' 
refresh request.

If you create your own pseudo-cookies and just hide
them in the 'extra' text fields that are allowed to be
in any "User-Agent:" field then Voila... it actually WORKS!

I know that's going to send chills up Roy's spine but
it happens to actually WORK OK.

Nothing happens other than 'the right thing'.

MSIE sees a 'different' "User-Agent:" field coming
back and could actually care less WHAT the 
value is... it only knows that it's now 'different' and
so it just goes ahead and accepts a 
'fresh' response for the "Vary:".

If this fellow were to simply 'stuff' his Cookie into the
'extra text' part of the User-Agent: string and send
out a "Vary: User-Agent" along with the response
then it would actually work the way he expects it to.

Nothing else is going to solve the problem with MSIE,
I'm afraid, other than this 'HACK the HACK'.

Later...
Kevin



Re: mod_proxy distinguish cookies?

2004-05-05 Thread Igor Sysoev
On Mon, 3 May 2004, Neil Gunton wrote:

 Well, that truly sucks. If you pass options around in params then
 whenever someone follows a link posted by someone else, they will
 inherit that person's options. The only alternative might be to make
 pages 'No-Cache' and then set the 'AccelIgnoreNoCache' mod_accel
 directive (which I haven't tried, but I assume that's what it does)...
 so even though my server will get hit a lot more, at least it might be
 stopped by the proxy rather than hitting the mod_perl.

The AccelIgnoreNoCache disables a client's Pragma: no-cache,
Cache-Control: no-cache and Cache-Control: max-age=number headers.

The AccelIgnoreExpires disables a backend's Expires,
Cache-Control: no-cache and Cache-Control: max-age=number headers.


Igor Sysoev
http://sysoev.ru/en/


Re: Sample code for IPC in modules

2004-05-05 Thread Geoffrey Young

 I put the new version at http://apache.org/~sctemme/mod_example_ipc.c
 to save on e-mail bandwidth.

if you're interested in this kind of thing, I've wrapped up mod_example_ipc
in an Apache-Test tarball:

  http://perl.apache.org/~geoff/mod_example-ipc.tar.gz

for no particular reason except that I know you were at my apachecon talk on
Apache-Test but I didn't cover C module integration at all.

fwiw.

--Geoff


Re: ssl_gcache_data preventing httpd startup

2004-05-05 Thread Joe Orton
On Tue, May 04, 2004 at 09:36:14PM +0200, Graham Leggett wrote:
 I have just installed the latest published version of httpd (v2.0.49), 
 and the problem where httpd refuses to start unless the file 
 ssl_gcache_data is manually deleted beforehand is still there.
 
 I recall some recent discussion about the problem, but don't know if a 
 fix ever got into the v2.0 tree. Is this fixed yet?

Mostly: http://nagoya.apache.org/bugzilla/show_bug.cgi?id=21335

joe


Re: mod_proxy distinguish cookies?

2004-05-05 Thread Neil Gunton
[EMAIL PROTECTED] wrote:
 If this fellow were to simply 'stuff' his Cookie into the
 'extra text' part of the User-Agent: string and send
 out a Vary: User-Agent along with the response
 then it would actually work the way he expects it to.

Thanks to Roy and Kevin for your insight. Sorry if this thread is
perhaps a bit off-topic for this list, but I hope you can indulge me
just a little longer. When I saw Roy's response regarding the 'Vary'
header, I thought that this would be exactly what I was after - you
could set 'Vary: Cookie' and then the browser would see that it should
reget the page if the cookie has changed. But this didn't seem to work
at all in practice. I am testing with the following sequence:

1. Get a page, which has Cache-Control and Expires headers set so that
it will be cached
2. Go to another page, where I use a form to change the option cookie
3. The options form sets the cookie and redirects the browser back to
the original page
4. The original page is displayed, not new version - browser doesn't
revalidate.

I have set all the headers, this is an example:

shell> HEAD http://dev.crazyguyonabike.com
200 OK
Cache-Control: must-revalidate; s-maxage=900; max-age=901
Connection: close
Date: Wed, 05 May 2004 16:08:34 GMT
Server: Apache
Vary: Cookie
Content-Length: 7020
Content-Type: text/html
Expires: Wed, 05 May 2004 16:23:35 GMT
Last-Modified: Wed, 05 May 2004 16:08:34 GMT
Client-Date: Wed, 05 May 2004 16:08:35 GMT
Client-Response-Num: 1
MSSmartTagsPreventParsing: TRUE

So I am setting the Cache-Control to cache the page, and the client is
directed to revalidate. I say in the Vary header that Cookie header must
be taken into account. But the browser simply fails to revalidate the
original page at all. If I manually refresh then it gets the correct
version, but I can't control manual refreshes (or user options) on the
browser end. I would simply love to be able to hit that sweet spot
where the browser caches the page, but also sees that some magic
component has changed and thus the old version of the page in the cache
cannot be used any more.

When I saw Kevin's response, it made perfect sense at first, because
what he describes is exactly what I experienced above. Neither Mozilla
1.4 nor IE 6 appears to take any notice of the 'Vary: Cookie' header. I
decided to try Kevin's suggestion re the User-Agent field, but after
looking at this further I am very confused. The User-Agent field is
something that is passed in *from* the client, not *to* the client from
a server. Why would IE or any other client even look at a User-Agent
field? Ok, ok, I understand, the whole point is that this is a hack,
but even so it doesn't seem to work for me. I tried setting the
User-Agent field:

shell> HEAD http://dev.crazyguyonabike.com
200 OK
Cache-Control: must-revalidate; s-maxage=900; max-age=901
Connection: close
Date: Wed, 05 May 2004 16:08:34 GMT
User-Agent: Mozilla/4.0 (compatible; opts=300)
Server: Apache
Vary: User-Agent
Content-Length: 7020
Content-Type: text/html
Expires: Wed, 05 May 2004 16:23:35 GMT
Last-Modified: Wed, 05 May 2004 16:08:34 GMT
Client-Date: Wed, 05 May 2004 16:08:35 GMT
Client-Response-Num: 1
MSSmartTagsPreventParsing: TRUE

As you can see, I've encoded the opts cookie into the User-Agent header.
Am I doing this right? Nothing appears to change, indeed now IE doesn't
even get the proper version when I hit 'Refresh'. Maybe I'm being dense
and didn't read the instructions correctly, but it seemed like this was
what was being suggested.

Once again, I apologize if this is overly obvious or off-topic, but I
have the feeling that I'm just missing something obvious here. Any
insight would be much appreciated. In summary, the problem currently
appears to be that neither Mozilla nor IE appears to even want to
revalidate the original page after the cookie has changed. When the
browser is redirected back to the original page (using identical URL)
from the options form, both browsers just use their cached version,
without even touching the server at all. No request, nothing. When I use
the 'Vary: Cookie' header, then manually refreshing does get the new
version. I know that browser settings can determine how often the
browser revalidates the page, but I can't tell random users on the
internet to change their settings for my site. I would have thought that
it should be possible for a page to be cached, and yet still be
invalidated by the cookie (or, in the general case, some 'Vary' header)
changing.

Anyway, thanks again...

-Neil


Re: cvs commit: httpd-2.0 STATUS

2004-05-05 Thread Jeff Trawick
[EMAIL PROTECTED] wrote:
jorton  2004/05/05 09:29:59

  Index: STATUS

   *) Readd suexec setuid and user check (now APR supports it)
os/unix/unixd.c: r1.69
  +1: nd, trawick
  +   +1: jorton, if surrounded with #ifdef APR_USETID to retain
  +   compatibility with APR 0.9.5
why the compatibility restriction?



Re: cvs commit: httpd-2.0 STATUS

2004-05-05 Thread Joe Orton
On Wed, May 05, 2004 at 03:05:45PM -0400, Jeff Trawick wrote:
 [EMAIL PROTECTED] wrote:
 jorton  2004/05/05 09:29:59
 
   Index: STATUS
 
*) Readd suexec setuid and user check (now APR supports it)
 os/unix/unixd.c: r1.69
   +1: nd, trawick
   +   +1: jorton, if surrounded with #ifdef APR_USETID to retain
   +   compatibility with APR 0.9.5
 
 why the compatibility restriction?

APR 0.9.4 is the latest released version of APR, and it's desirable that
the 2.0 branch is always usable with a released version of APR.

joe


Re: mod_proxy distinguish cookies?

2004-05-05 Thread TOKILEY

Hi Neil...
This is Kevin Kiley...

Personally, I don't think this discussion is all that OT for
Apache but others might disagree.

"Vary:" is still a broken mess out there and if 'getting it right'
is still anyone's goal then these are the kinds of discussions
that need to take place SOMEWHERE. Apache is not the
W3C but it's about as close as you can get.

I haven't looked at this whole thing for a LOOONG time so
I had to go back and check my notes regarding the 
MSIE 'User-Agent' trick.

As absurd as it sounds... you actually got the point.

"User-Agent:' IS, in fact, supposed to be a 'request-side'
header but when it comes to "Vary:"... the world can
turn upside down and what doesn't seem to make any
sense can actually WORK.

Unfortunately... I can't find the (old) notes I had about
exactly what I did to make the "Vary: User-Agent" trick
actually work with MSIE. I was just mucking around and
never had any intention of implementing this as a solution
for anything but I DO remember somehow making it WORK
( almost ) just the way you are doing it.

If I have some time... I'll try to find those notes and the
test code I know I had somewhere that WORKED.

Another fellow who just responded pointed out that
"Content-encoding:'" seems to be another field that
MSIE will actually react to when it comes to VARY.

Well... it had been so long since I mucked with all
this I had to go back and find/read some notes.

The fellow who posted is SORT OF right about
"Content-Encoding:" LOOKING like it can "Vary:"
but it's not really "Vary:" at work at all.

The REALITY is explained in that link I already
supplied in last message...

http://lists.over.net/pipermail/mod_gzip/2002-December/006838.html

Unless there has been some major change or patch to MSIE 6.0
and above then I still stand by my original research/statement...

MSIE will treat ANY field name OTHER than "User-Agent"
that arrives with a "Vary:" header on a non-compressed
response as if it had received
"Vary: *" ( Vary: STAR ) and it will NOT CACHE that response
locally. Every reference to page ( Via Refresh, Back-button, 
local hyperlink-jump, whatever ) will cause MSIE to go all
the way upstream for a new copy of the page EVERY TIME.

Maybe this is really what you want? Dunno.

The reason it also LOOKS like "Content-Encoding" is 
being accepted as a VARY and MSIE is sending out
an 'If-Modified-Since:' on those pages is NOT because
it is doing "Vary:"... it's for other strange reasons.

Whenever MSIE receives a compressed response
( Content-encoding: gzip ) then it will ALWAYS
cache that response... even if it has been specifically
told to NEVER do that ( no-cache, Expires: -1 , whatever ).

It HAS to. MSIE ( and Netscape ) MUST use the CACHE FILE
to DECOMPRESS the response... and it always KEEPS
it around.

Neither MSIE or Netscape nor Opera are able to 'decompress'
in memory. They all MUST have a cache file to work from
even if they are not supposed to EVER cache that 
particular response. They just do it anyway.

So... to make a long story short... MSIE will always 
decide it MUST cache a response with any kind of
"Content-Encoding:" on it and it will set the cache 
flags for that puppy to 'always-revalidate' and that's
where the "If-Modified-Since:" output is coming from
which makes it LOOK like "Vary:" is involved...
but it is NOT.

However... in the world of "Vary:" you run into this snafu
whereby you can't differentiate between what you are
trying to tell an inline Proxy Cache 'what to do' versus
an end-point user-agent.

Example: If you are a COS ( Content Origin Server ) and
you want a downstream Proxy Cache to 'Vary' the 
( non-expired ) response it might give out according to
whether a requestor says it can handle compression
or not ( Accept-encoding: gzip, deflate ) then the right
VARY header to add to the response(s) is

"Vary: Accept-Encoding"

and not 

"Vary: Content-Encoding".

The "Content-Encoding" only comes FROM the Server.
The 'decision' you want the Proxy Cache to make can
only be based on whether a requestor has sent
"Accept-Encoding: gzip, deflate" ( or not ).

If there is no inline Proxy ( which is always impossible to tell )
and response is direct to browser then the same "Vary:"
header that would 'do the right thing' for a Proxy Cache
is meaningless for the end-point user-agent itself.

The User-Agent never 'varies' it's own 'Accept-Encoding:'
output header ( unless you are using Opera and clicking
all those 'imitate other browser' options in-between requests
for the same resource ).

One of the biggest misconceptions out there is that browsers
are somehow REQUIRED to obey all the RFC standard 
caching rules as if they were HTTP/x.x compliant Proxy
Caches.

They are NOT. The RFC's themselves say that end-point
user agents can be 'implementation specific' when it comes
to caching and should not be considered true "Proxy Caches".

Most major browsers DO 'follow the rules' ( sort of ) but 
none of them could be considered true HTTP 

Re: Sample code for IPC in modules

2004-05-05 Thread Mark Wolgemuth
(see note on hup cleanup below)
On May 5, 2004, at 2:51 AM, Sander Temme wrote:
Hi Mark,

Thanks for your observations.

On May 4, 2004, at 7:18 PM, mark wrote:

2)
Detach is never needed. However, depending on desired results, it is 
usually desirable to perform a destroy when a HUP signal is sent, so 
that it gets created fresh by post_config

I've run into strange errors under high load where newly forked 
children startup thinking they are attached to the inherited shm seg, 
but are in fact attached to some anonymous new segment. No error is 
produced, but obviously it's a catastrophic situation.
Yeah, that would be Bad. However, how does one hook into the SIGHUP 
handler? AFAIK, httpd has its own signal handlers that do stuff like 
restarts and graceful, and if I registered another handler, I would 
overrule the one that httpd sets. Or is there a provision for a chain 
of signal handlers?

You don't really need to worry about the SIGHUP handler, just tie a 
cleanup function to the pool used to create
it in post_config. This will be the process pool of the parent, and its 
cleanups get run after all the children exit on a restart. It works for 
me.

static apr_status_t
shm_cleanup_wrapper(void *unused)
{
    apr_status_t rv;

    if (shm_seg)
        rv = apr_shm_destroy(shm_seg);
    else
        rv = APR_EGENERAL;
    return rv;
}
then in post_config:

apr_pool_cleanup_register(pool, NULL, shm_cleanup_wrapper, 
apr_pool_cleanup_null);

... where pool is the first parameter to post_config (and used to 
create shmseg);
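The pool-cleanup pattern Mark describes can be sketched abstractly (a toy model, not the APR API; APR runs pool cleanups in reverse registration order as the pool is destroyed, which here stands in for the parent's pool being torn down on restart):

```python
class Pool:
    """Toy model of an APR-style memory pool: callbacks registered on the
    pool run when the pool is destroyed, so a shm segment tied to the
    parent's pool is destroyed on restart and recreated by post_config."""

    def __init__(self):
        self._cleanups = []

    def cleanup_register(self, fn):
        # analogous to apr_pool_cleanup_register(pool, data, fn, ...)
        self._cleanups.append(fn)

    def destroy(self):
        # cleanups run LIFO, mirroring APR's reverse-registration order
        for fn in reversed(self._cleanups):
            fn()
        self._cleanups.clear()
```

The module never needs its own SIGHUP handler: registering the shm destroy as a cleanup on the post_config pool gets it invoked automatically when that pool goes away.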



I put the new version at 
http://apache.org/~sctemme/mod_example_ipc.c to save on e-mail 
bandwidth.

Thanks again,

S.

--
[EMAIL PROTECTED]  http://www.temme.net/sander/
PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF



Re: mod_proxy distinguish cookies?

2004-05-05 Thread Neil Gunton
[EMAIL PROTECTED] wrote:
 Bottom line:
 
 In order to do your 'Cookie' scheme and have it work with
 all major browsers you might have to give up on the idea
 that the responses can EVER be 'cached' locally by
 a browser... but now you also lose the ability to have
 it cached by ANYONE.
 
 There is no HTTP caching control directive that says...
 
 Cache-Control: no-cache-only-if-endpoint-user-agent
 
 Given the caching issues in most 'end-point' browsers...
 There probably should be such a directive.
 
 The ONLY guy you don't want to cache it is the
 end-point browser itself... but you DO want the
 response available from other nearby caches so
 your Content Origin Server doesn't get hammered
 to death.

Thanks again Kevin for the insight and interesting links. It seems to me
that there are basically three components here: My server, intermediate
caching proxies, and the end-user browser. From my understanding of the
discussion so far, each of these can be covered as follows:

1. My server: Cookies can be understood (i.e. queries are
differentiated) by my server's reverse proxy cache.

2. Intermediate caching proxies: I can use the 'Vary: Cookie' header to
tell any intermediate caches that cookies differentiate requests.

3. Browsers: Pass the option cookie around as part of the URL param list
(relatively easy to do using HTML::Embperl or other template solution).
So if the cookie is opts=123, then I make every link on my site be of
the form /somedir/example.html?opts=123 This makes the page look
different to the browser when the cookie is changed, so the browser will
have to get the new version of the page. I don't actually use the URL
param on the backend, only the cookie version of the value is used. The
URL param is simply there to make the URL look different to the browser.
Thus if someone posts a link to my website with opt=123 in the query
string, and then someone with cookie opt=456 clicks on that link, they
should successfully get the right version of the page.
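The link-rewriting in point 3 amounts to mirroring the cookie value into every generated URL. A minimal sketch (hypothetical helper; the real site would do this in HTML::Embperl or whatever template layer generates the links):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_opts_param(url, opts):
    """Append (or overwrite) an `opts` query parameter mirroring the opts
    cookie, so the URL the browser sees changes whenever the cookie does.
    The backend still reads only the cookie; the param exists purely to
    bust the browser's cache."""
    parts = urlsplit(url)
    query = dict(parse_qsl(parts.query))
    query["opts"] = opts
    return urlunsplit(parts._replace(query=urlencode(query)))
```

Because the param is ignored server-side, a stale opts value in a shared link is harmless: the visitor's own cookie still selects the right variant.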

I think all this allows me to have pages be cached, while also allowing
cookies to be used to store options. This does assume that any real
proxy caches in the middle obey the Vary: Cookie header. If they get a
request for a page in their cache from a browser with a different cookie
to that associated with the cache entry, then presumably the cache is
required to not use the cache entry and pass it through to the origin
server.
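The behavior point 2 relies on — a cache refusing to reuse an entry when a Vary-listed request header differs — is straightforward to state as code. A sketch of the RFC 2616 matching rule (not any particular proxy's implementation; header names compared case-insensitively):

```python
def vary_match(vary_value, cached_request, new_request):
    """Return True if a stored response carrying this Vary header may be
    reused for new_request. Per RFC 2616, every request header named in
    Vary must have the same value as in the request the response was
    originally stored with; "Vary: *" never matches."""
    if vary_value.strip() == "*":
        return False
    for name in vary_value.split(","):
        name = name.strip().lower()
        if cached_request.get(name) != new_request.get(name):
            return False
    return True
```

So with "Vary: Cookie", a request arriving with a different opts cookie fails the match and must be forwarded to the origin server — exactly the pass-through behavior described above.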

This obviously isn't ideal, but it attempts to address the world as it
seems to be today.

If anyone sees any potential problems with this sort of setup, then let
me know...

Thanks again, this has been a very enlightening discussion.

-Neil


Re: mod_proxy distinguish cookies?

2004-05-05 Thread TOKILEY

 Neil wrote...

 Thanks again Kevin for the insight and interesting links. It seems to me
 that there are basically three components here: My server, intermediate
 caching proxies, and the end-user browser. From my understanding of the
 discussion so far, each of these can be covered as follows:

 1. My server: Cookies can be understood (i.e. queries are
 differentiated) by my server's reverse proxy cache.

Sure... but only if you are receiving all the requests WHEN
and AS OFTEN as you need to. ( User-Agents coming back
for pages when they are supposed to )...

 2. Intermediate caching proxies: I can use the 'Vary: Cookie' header to
 tell any intermediate caches that cookies differentiate requests.

Nope. Scratch the word 'any' and substitute 'some'.

There are very few 'Intermediate caching proxies' that are able to
'do the right thing' when it comes to 'Vary:'.

MOST Proxy Cache Servers ( including ones that SAY they are
HTTP/1.1 compliant ) do NOT handle Vary: and they will simply
treat ANY response they get with a "Vary:" header of any kind
exactly the way MSIE seems to. They will treat it as if it was
"Vary: *" ( Vary: STAR ) and will REFUSE to cache it at all.

Might as well just use 'Cache-Control: no-cache'. It will be the
same behavior for caches that don't support "Vary:".

SQUID is the ONLY caching proxy I know of that even comes
close to handling "Vary:" correctly but only the latest version(s).

For years now... even SQUID would just 'punt' any response
that had any kind of "Vary:" header at all. It would default
all "Vary: xx" headers to "Vary: *" ( Vary: STAR ) and
never bother to cache them at all.

Even the latest version
of SQUID is still not HTTP/1.1 compliant. There are still a lot
of 'Etag:' things that don't get handled correctly.

It's possible to implement "Vary:" without doing full "Etag:"
support as well but there will always be times when the 
response is not cacheable unless full "Etag:" support
is onboard.

So you CAN/SHOULD use the "Vary: Cookie" response
header and it WILL work for SOME inline caches... but
be fully prepared for users to report problems when the
inline cache is paying no attention to your "Vary:".
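The behavior described above can be sketched as a toy cache-key rule
(illustrative Python, not the code of Squid or any real proxy): a
Vary-aware cache keys a response on the URL plus the values of the
request headers named in "Vary:", while "Vary: *" makes the response
uncacheable at all:

```python
# Toy sketch of how a Vary-aware HTTP cache builds its cache key.
# Illustrative only -- not how Squid or any real proxy is implemented.

def cache_key(url, request_headers, vary_header):
    """Return a cache key for the response, or None if uncacheable."""
    if vary_header is None:
        return (url,)                      # no Vary: key on the URL alone
    names = [h.strip() for h in vary_header.split(",")]
    if "*" in names:
        return None                        # "Vary: *" -> refuse to cache
    # Key on the URL plus each varying request header's value.
    return (url,) + tuple(request_headers.get(n.lower(), "") for n in names)
```

Under "Vary: Cookie", two requests for the same URL with different
Cookie headers get different keys, so both variants can be stored; a
cache that treats every "Vary:" as "Vary: *" simply never caches.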

 3. Browsers: Pass the option cookie around as part of the URL param list
 (relatively easy to do using HTML::Embperl or other template solution).
 So if the cookie is "opts=123", then I make every link on my site be of
 the form "/somedir/example.html?opts=123...". This makes the page look
 different to the browser when the cookie is changed, so the browser will
 have to get the new version of the page. 

Not sure. Maybe.

I guess I really don't follow what the heck you are trying to do here.

What do you mean by 'make every link on my site be of the form uri?'

Don't you mean you want everyone USING your site to be sending
these various 'cookie' deals so you can tell who is who and something
just steps in and makes sure they get the right response?

You should not have to 'make every link on my site' be anything.
Something else should be sorting all the requests out.

I guess I just don't get what it is you are trying to do that falls
outside the boundaries of normal CGI and 'standard practice'.

AFAIK 'shopping carts' had this all figured out years ago.

Now... if what you meant was that every time you send a PAGE
down to someone with a particular cookie ( Real Cookie:, not
URI PARMS one ) and you re-write all the clickable 'href' links
in THAT DOCUMENT to have the 'other URI cookie' then yea
I guess that will work. That should force any 'clicks' on that
page to come back to you so that YOU can decide where
they go or if that Cookie needs to change.

But that would mean rewriting every page on the way out the door.

Surely there must be an easier way to do whatever it is you
are trying to do.

Officially... the fact that you will be using QUERY PARMS at
all times SHOULD take you out of the 'caching' ball game
altogether since the mere presence of QUERY PARMS in
a URI is SUPPOSED to make it ineligible for caching at
any point in the delivery chain.

In other words... might as well use 'Cache-Control: no-cache'
and just force everybody to come back all the time.

 ...This makes the page look
 different to the browser when the cookie is changed, so the browser will
 have to get the new version of the page. 

Again.. I am not sure I would say 'have to'.

There is no 'have to' when it comes to what a User-Agent may or
may not be doing with cached files. Most of them follow the rules
but many do not.

I think you might be a little confused about what is actually going
on down at the browser level.

Just because someone hits a 'Forward' or a 'Back' button on some
GUI menu doesn't mean the HTTP freshness ( schemes ) always
come into play. All you are asking the browser to do is jump 
between pages it has stored locally and that local cache is
not actually required to be HTTP/1.1 compliant. It usually is NOT.

Only the REFRESH button ( or CTRL-R ) can FORCE some browsers
to 're-validate' a page. Simple local button navigations and
re-displays from a local history list do not necessarily FORCE
the browser to do anything at all 'out on the wire'.

Need Help Debugging Shared Library (libaprutil-0.so)

2004-05-05 Thread Steve Waltner
http://nagoya.apache.org/bugzilla/show_bug.cgi?id=21719

Since my submitted bug hasn't been resolved in the 9 months since I 
first reported it, I figure it's about time I try and resolve this 
problem myself since I do have the source code. I've done a partial 
debug on the failure but can't get everything figured out since I can't 
get DDD/gdb to debug some of the code (coming from apr_ldap_url.c).

I'm currently using the 2.0.49 source tree for my testing. The problem 
starts in mod_auth_ldap.c. When I load the source in ddd, I get an 
error stating:

Line 1 of mod_auth_ldap.c is at address 0x2ebd4 
<derive_codepage_from_lang> but contains no code.

This doesn't seem to be a fatal error since I can go in and set 
breakpoints in the file. I set a breakpoint at line 702 and start the 
program. At this point, url looks good since it contains the 
AuthLDAPURL entry from my config file. I continue on to line 703 and 
urld contains bogus data. The host and filter parameters are swapped, 
the lud_attrs points off into oblivion (causing the segfault in the 
apr_pstrdup call on line 755, which is the final death for the 
process).

The problem comes in the fact that I can't seem to trace anything 
inside apr_ldap_url.c, which is where the real problems seem to lie. 
When I load this source file, gdb spits out the error:

Line number 1 is out of range for apr_ldap_url.c.

The function ldap_url_parse_ext() is not processing URLs properly on 
Solaris-SPARC, but does work fine on Linux-x86 (endian-ness error?). 
When I try to set a breakpoint in apr_ldap_url.c, I get:

No line 255 in file apr_ldap_url.c.

I believe this is due to the fact that this is coming in from a shared 
library instead of statically linked libraries and have tried to get 
around this by linking httpd statically, but that seems to be a royal 
pain to accomplish. Even when I add the --enable-static-htpasswd 
argument to configure, it comes up with a dynamically linked 
executable. Is there any way to get httpd to be statically linked so I 
can do source level debugging on ldap_url_parse_ext(). I could just 
read through the ~300 lines of code, but it would be much easier to 
find the problem by looking at all the variables as the function is 
running to see where the problems are.

Steve



Re: mod_proxy distinguish cookies?

2004-05-05 Thread Neil Gunton
[EMAIL PROTECTED] wrote:
 MOST Proxy Cache Servers ( including ones that SAY they are
 HTTP/1.1 compliant ) do NOT handle Vary: and they will simply
 treat ANY response they get with a Vary: header of any kind
 exactly the way MSIE seems to. They will treat it as if it was
 Vary: *  ( Vary: STAR ) and will REFUSE to cache it at all.

That's fine with me... I am mainly concerned with the browser and my
server. I know the browser will cache stuff when I want it to, and so
will my own reverse proxy. If intermediate caches choose not to then I
don't think it will have a huge effect on my server.

 I guess I really don't follow what the heck you are trying to do here.
 
 What do you mean by 'make every link on my site be of the form
 uri?'

Check out the site in question, http://www.crazyguyonabike.com/ for an
example of what I'm talking about. The code on this site may change in
the next couple of days, as I move over to the new way of doing things
(outlined in the previous email), but it does currently have the
pics=xxx on all URLs on the site. I achieve this by having global
Perl routines for writing all links in all the pages. This is done in
HTML::Embperl templates - every page on the site is a template. This is
the way that you can pass options around the site without using cookies.
The flaw is as I mentioned previously, if someone posts a link
somewhere, then that link will inevitably have the poster's options
embedded in the URL. So anyone who clicks on that link will get their
own options overwritten by the ones in that link. This does work just fine
currently, has for a while now in fact.
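The link-writing scheme described here (every internal href carries the
current options as a query parameter) can be sketched like this; this is
illustrative Python rather than the actual HTML::Embperl/Perl helpers on
the site, and the `opts` parameter name is just the example used in the
thread:

```python
# Sketch of a global link-writing helper: every internal link carries
# the user's current options in the query string, so the URL itself
# differs per option set and caches treat each variant separately.
from urllib.parse import urlencode

def write_link(path, opts, extra_params=None):
    """Build an href that embeds the user's option cookie as a query param."""
    params = dict(extra_params or {})
    params["opts"] = opts                  # e.g. "123", taken from the cookie
    return path + "?" + urlencode(params)
```

So `write_link("/somedir/example.html", "123")` yields
`/somedir/example.html?opts=123`, which also exhibits the flaw noted
above: a posted link carries the poster's options.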

 I guess I just don't get what it is you are trying to do that falls
 outside the boundaries of normal CGI and 'standard practice'.

What I do currently falls well within normal CGI conventions and
'standard practice', afaik. I have also tested this with the major
browsers (at least IE and Mozilla) and it works just fine, with the
browser caching requests correctly according to the Cache-Control and
Expires headers, and also distinguishing requests based on the URL.
Perhaps this is just by coincidence and isn't the way the standards are
supposed to work, but then again I think it's probable that things in
the HTTP world are so entrenched at this point that if they changed the
way all this works, it would just break too many sites. So it'll
probably stay like this for the foreseeable future, if previous
experience of inertia is anything to go by...

 AFAIK 'shopping carts' had this all figured out years ago.
 
 Now... if what you meant was that every time you send a PAGE
 down to someone with a particular cookie ( Real Cookie:, not
 URI PARMS one ) and you re-write all the clickable 'href' links
 in THAT DOCUMENT to have the 'other URI cookie' then yea
 I guess that will work. That should force any 'clicks' on that
 page to come back to you so that YOU can decide where
 they go or if that Cookie needs to change.
 
 But that would mean rewriting every page on the way out the door.
 
 Surely there must be an easier way to do whatever it is you
 are trying to do.

Using a template tool like HTML::Embperl, this is really not all that big
a deal. Every single page on my site is a template, some with HTML and
Perl code, some pure Perl modules. It may offend some purists, but I've
been developing this site for over three years now and it works well for
me.

 Officially... the fact that you will be using QUERY PARMS at
 all times SHOULD take you out of the 'caching' ball game
 altogether since the mere presence of QUERY PARMS in
 a URI is SUPPOSED to make it ineligible for caching at
 any point in the delivery chain.

Is this true, or is it just something that the early proxies did because
of assumptions about CGI scripts being always dynamic and therefore not
cacheable? I think I read that somewhere (or maybe it was a comment
about URLs with 'cgi-bin'), and anyway as I said earlier, these requests
seem to be cached correctly by mod_proxy, mod_accel and the browsers, as
long as the correct Expires and Cache-Control headers are present. I
found that Last-Modified had to be present as well for mod_proxy to
cache, I seem to recall. But anyway, it does work.
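For reference, the kind of response-header set described here can be
produced with mod_expires and mod_headers; this is a hedged sketch with
illustrative lifetimes, not the site's actual configuration:

```apache
# Illustrative origin-server config: explicit freshness headers that,
# per the report above, let mod_proxy cache responses (Last-Modified
# reportedly had to be present on the response as well).
ExpiresActive On
ExpiresDefault "access plus 1 hour"
Header set Cache-Control "max-age=3600"
```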

 In other words... might as well use 'Cache-Control: no-cache'
 and just force everybody to come back all the time.

I don't think this is necessarily true, just from my own testing.

 Just because someone hits a 'Forward' or a 'Back' button on some
 GUI menu doesn't mean the HTTP freshness ( schemes ) always
 come into play. All you are asking the browser to do is jump
 between pages it has stored locally and that local cache is
 not actually required to be HTTP/1.1 compliant. It usually is NOT.
 
 Only the REFRESH button ( or CTRL-R ) can FORCE some browsers
 to 're-validate' a page. Simple local button navigations and
 re-displays
 from a local history list do not necessarily FORCE the browser to
 do anything at all 'out on the wire'.
 
 My own local Doppler Radar page is 

Re: Need Help Debugging Shared Library (libaprutil-0.so)

2004-05-05 Thread Stas Bekman
Steve Waltner wrote:
http://nagoya.apache.org/bugzilla/show_bug.cgi?id=21719

Since my submitted bug hasn't been resolved in the 9 months since I 
first reported it, I figure it's about time I try and resolve this 
problem myself since I do have the source code. I've done a partial 
debug on the failure but can't get everything figured out since I can't 
get DDD/gdb to debug some of the code (coming from apr_ldap_url.c).

I'm currently using the 2.0.49 source tree for my testing. The problem 
starts in mod_auth_ldap.c. When I load the source in ddd, I get an error 
stating:

Line 1 of mod_auth_ldap.c is at address 0x2ebd4 
<derive_codepage_from_lang> but contains no code.
You need two things:

1) compile with debug symbols retained which you get when building apache with 
--enable-maintainer-mode

2) make sure to load the library from gdb (or DDD's gdb console):

(gdb) sharedlib apr

or whichever lib it is.

You may find some useful notes here:
http://perl.apache.org/docs/2.0/devel/debug/c.html
They are for debugging mod_perl 2.0, but most of it applies to any other 
shared C library.
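Concretely, a gdb session along these lines should work; this is a
sketch, and the httpd path, the `aprutil` library-name pattern, and the
use of single-process mode (`-X`) are assumptions about the build at
hand:

```
# Start httpd under gdb in single-process mode so breakpoints fire
# in the one worker (assumes a --enable-maintainer-mode build).
$ gdb /usr/local/apache2/bin/httpd
(gdb) set breakpoint pending on     # defer until the .so is loaded
(gdb) break ldap_url_parse_ext      # symbol lives in libaprutil-0.so
(gdb) run -X
(gdb) sharedlibrary aprutil         # (re)load symbols for the library
(gdb) info sharedlibrary            # verify libaprutil-0.so is listed
```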

__
Stas Bekman    JAm_pH -- Just Another mod_perl Hacker
http://stason.org/ mod_perl Guide --- http://perl.apache.org
mailto:[EMAIL PROTECTED] http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com


[STATUS] (apache-1.3) Wed May 5 23:45:07 EDT 2004

2004-05-05 Thread Rodent of Unusual Size
APACHE 1.3 STATUS:  -*-text-*-
  Last modified at [$Date: 2004/04/19 18:53:57 $]

Release:

   1.3.31-dev: In development. Plan to TR week of April 19.
   1.3.30: Tagged April 9, 2004. Not released.
   1.3.29: Tagged October 24, 2003. Announced Oct 29, 2003.
   1.3.28: Tagged July 16, 2003. Announced ??
   1.3.27: Tagged September 30, 2002. Announced Oct 3, 2002.
   1.3.26: Tagged June 18, 2002.
   1.3.25: Tagged June 17, 2002. Not released.
   1.3.24: Tagged Mar 21, 2002. Announced Mar 22, 2002.
   1.3.23: Tagged Jan 21, 2002.
   1.3.22: Tagged Oct 8, 2001.  Announced Oct 12, 2001.
   1.3.21: Not released.
 (Pulled for htdocs/manual config mismatch. t/r Oct 5, 2001)
   1.3.20: Tagged and rolled May 15, 2001. Announced May 21, 2001.
   1.3.19: Tagged and rolled Feb 26, 2001. Announced Mar 01, 2001.
   1.3.18: Tagged and rolled. Not released.
 (Pulled because of an incorrect unescaping fix. t/r Feb 19, 2001)
   1.3.17: Tagged and rolled Jan 26, 2001. Announced Jan 29, 2001.
   1.3.16: Not released.
 (Pulled because of vhosting bug. t/r Jan 20, 2001)
   1.3.15: Not released.
 (Pulled due to CVS dumping core during the tagging when it
  reached src/os/win32/)
   1.3.14: Tagged and Rolled Oct 10, 2000.  Released/announced on the 13th.
   1.3.13: Not released.
 (Pulled in the first minutes due to a Netware build bug)
   1.3.12: Tagged and rolled Feb. 23, 2000. Released/announced on the 25th.
   1.3.11: Tagged and rolled Jan. 19, 2000. Released/announced on the 21st.
   1.3.10: Not released.
 (Pulled at last minute due to a build bug in the MPE port)
1.3.9: Tagged and rolled on Aug. 16, 1999. Released and announced on 19th.
1.3.8: Not released.
1.3.7: Not released.
1.3.6: Tagged and rolled on Mar. 22, 1999. Released and announced on 24th.
1.3.5: Not released.
1.3.4: Tagged and rolled on Jan. 9, 1999.  Released on 11th, announced on 12th.
1.3.3: Tagged and rolled on Oct. 7, 1998.  Released on 9th, announced on 10th.
1.3.2: Tagged and rolled on Sep. 21, 1998. Announced and released on 23rd.
1.3.1: Tagged and rolled on July 19, 1998. Announced and released.
1.3.0: Tagged and rolled on June 1, 1998.  Announced and released on the 6th.
   
2.0  : Available for general use, see httpd-2.0 repository

RELEASE SHOWSTOPPERS:

   * mod_digest/nonce issue.
  Message-Id: [EMAIL PROTECTED]
  Patches: Already committed to CVS for ease of review and test.
   Developers should review these!!

RELEASE NON-SHOWSTOPPERS BUT WOULD BE REAL NICE TO WRAP THESE UP:

   *  PR: 27023 Cookie could not be delivered if the cookie was made
  before the proxy module.

   * isn't ap_die() broken with recognizing recursive errors
   Message-Id: [EMAIL PROTECTED]
+1: jeff, jim

   * Current vote on 3 PRs for inclusion:
  Bugz #17877 (passing chunked encoding thru proxy)
  (still checking if RFC compliant... vote is on the
   correctness of the patch code only).
+1: jim, chuck, minfrin
  Bugz #9181 (Unable to set headers on non-2XX responses)
+1: Martin, Jim
  Gnats #10246 (Add ProxyConnAllow directive)
+0: Martin (or rather -.5, see dev@ Message
[EMAIL PROTECTED])

* htpasswd.c and htdigest.c use tmpnam()... consider using
  mkstemp() when available.
Message-ID: [EMAIL PROTECTED]
Status:

* Dean's unescaping hell (unescaping the various URI components
  at the right time and place, esp. unescaping the host name).
Message-ID: [EMAIL PROTECTED]
Status:

* Martin observed a core dump because a ipaddr_chain struct contains
  a NULL-server pointer when being dereferenced by invoking httpd -S.
Message-ID: [EMAIL PROTECTED]
Status: Workaround enabled. Clean solution can come after 1.3.19

* long pathnames with many components and no AllowOverride None
  Workaround is to define Directory / with AllowOverride None,
  which is something all sites should do in any case.
Status: Marc was looking at it.  (Will asks 'wasn't this patched?')

* Ronald Tschalär's patch to mod_proxy to allow other modules to
  set headers too (needed by mod_auth_digest)
Message-ID: [EMAIL PROTECTED]
Status:


Available Patches (Most likely, will be ported to 2.0 as appropriate):

   *  A rewrite of ap_unparse_uri_components() by Jeffrey W. Baker
 [EMAIL PROTECTED] to more fully close some segfault potential.
Message-ID: [EMAIL PROTECTED]
Status:  Jim +1 (for 1.3.19), Martin +0

* Andrew Ford's patch (1999/12/05) to add absolute times to mod_expires
Message-ID: [EMAIL PROTECTED]
Status: Martin +1, Jim +1, Ken +1 (on concept)

* Raymond S Brand's patch to mod_autoindex to fix the header/readme
  include processing so the envariables are correct for 

[STATUS] (httpd-2.0) Wed May 5 23:45:14 EDT 2004

2004-05-05 Thread Rodent of Unusual Size
APACHE 2.0 STATUS:  -*-text-*-
Last modified at [$Date: 2004/05/05 16:29:58 $]

Release:

2.0.50  : in development
2.0.49  : released March 19, 2004 as GA.
2.0.48  : released October 29, 2003 as GA.
2.0.47  : released July 09, 2003 as GA.
2.0.46  : released May 28, 2003 as GA.
2.0.45  : released April 1, 2003 as GA.
2.0.44  : released January 20, 2003 as GA.
2.0.43  : released October 3, 2002 as GA.
2.0.42  : released September 24, 2002 as GA.
2.0.41  : rolled September 16, 2002.  not released.
2.0.40  : released August 9, 2002 as GA.
2.0.39  : released June 17, 2002 as GA.
2.0.38  : rolled June 16, 2002.  not released.
2.0.37  : rolled June 11, 2002.  not released.
2.0.36  : released May 6, 2002 as GA.
2.0.35  : released April 5, 2002 as GA.
2.0.34  : tagged March 26, 2002.
2.0.33  : tagged March 6, 2002.  not released.
2.0.32  : released February 16, 2002 as beta.
2.0.31  : rolled February 1, 2002.  not released.
2.0.30  : tagged January 8, 2002.  not rolled.
2.0.29  : tagged November 27, 2001.  not rolled.
2.0.28  : released November 13, 2001 as beta.
2.0.27  : rolled November 6, 2001
2.0.26  : tagged October 16, 2001.  not rolled.
2.0.25  : rolled August 29, 2001
2.0.24  : rolled August 18, 2001
2.0.23  : rolled August 9, 2001
2.0.22  : rolled July 29, 2001
2.0.21  : rolled July 20, 2001
2.0.20  : rolled July 8, 2001
2.0.19  : rolled June 27, 2001
2.0.18  : rolled May 18, 2001
2.0.17  : rolled April 17, 2001
2.0.16  : rolled April 4, 2001
2.0.15  : rolled March 21, 2001
2.0.14  : rolled March 7, 2001
2.0a9   : released December 12, 2000
2.0a8   : released November 20, 2000
2.0a7   : released October 8, 2000
2.0a6   : released August 18, 2000
2.0a5   : released August 4, 2000
2.0a4   : released June 7, 2000
2.0a3   : released April 28, 2000
2.0a2   : released March 31, 2000
2.0a1   : released March 10, 2000

Please consult the following STATUS files for information
on related projects:

* srclib/apr/STATUS
* srclib/apr-util/STATUS
* docs/STATUS

Contributors looking for a mission:

* Just do an egrep on TODO or XXX in the source.

* Review the PatchAvailable bugs in the bug database.
  Append a comment saying Reviewed and tested.

* Open bugs in the bug database.

RELEASE SHOWSTOPPERS:

PATCHES TO BACKPORT FROM 2.1
  [ please place file names and revisions from HEAD here, so it is easy to
identify exactly what the proposed changes are! ]

*) mod_cgi: Handle stderr output during script execution
   
http://cvs.apache.org/viewcvs.cgi/httpd-2.0/modules/generators/mod_cgi.c?r1=1.160&r2=1.163
   PR: 22030, 18348
   +1: jorton

*) Readd suexec setuid and user check (now APR supports it)
 os/unix/unixd.c: r1.69
   +1: nd, trawick
   +1: jorton, if surrounded with #ifdef APR_USETID to retain
   compatibility with APR 0.9.5

*) Prevent Win32 pool corruption at startup
 server/mpm/winnt/child.c: r1.36 
   +1: ake, trawick

*) mod_log_forensic: Fix build on systems without unistd.h. PR 28572
 modules/loggers/mod_log_forensic.c: r1.19
   +1: nd, trawick

*) mod_actions: Regression from 1.3: the file referred to must exist.
   Solve this by introducing the virtual modifier to the Action
   directive. PR 28553.
 modules/mappers/mod_actions.c: r1.32, r1.34
   +1: nd

*) htpasswd should not refuse to process files containing empty lines.
 support/htpasswd.c: r1.76
   +1: nd, trawick

*) Disable AcceptEx on Win9x systems automatically. (broken in 2.0.49)
   PR 28529
 server/mpm/winnt/mpm_winnt.c: 1.311
   +1: nd, trawick

*) export ap_set_sub_req_protocol and ap_finalize_sub_req_protocol on Win32.
   (should be a minor MMN bump). PR 28523.
 server/protocol.c: r1.147
 include/http_protocol.h: r1.91
   +1: nd, trawick

*) allow symlinks on directories to be processed by Include directives
   and stop possible recursion by a counter. PR 28492
 server/config.c: r1.175
   +1: nd

*) detect Include directive recursion by counting the nesting level.
   PR 28370.
 server/core.c: r1.275
   +1: nd

*) mod_headers: Backport ErrorHeader directive (regression from 1.3)
 modules/metadata/mod_headers.c: r1.44, 1.45, 1.51
   +1: nd, trawick

*) mod_headers: Allow conditional RequestHeader directives. PR 27951
 modules/metadata/mod_headers.c: r1.52
   +1: nd, trawick

*) Allow URLs for ServerAdmin. PR 28174.
 server/core.c: r1.274
   +1: nd, bnicholes

*) mod_rewrite: Fix confused map cache (with maps in different VHs using
   the same name). PR 26462. (2.0 + 1.3)
   A patch for 1.3 is here (2.0 goes similar):
   

[STATUS] (httpd-2.1) Wed May 5 23:45:20 EDT 2004

2004-05-05 Thread Rodent of Unusual Size
APACHE 2.1 STATUS:  -*-text-*-
Last modified at [$Date: 2004/04/27 22:09:17 $]

Release [NOTE that only Alpha/Beta releases occur in 2.1 development]:

2.1.0   : in development

Please consult the following STATUS files for information
on related projects:

* srclib/apr/STATUS
* srclib/apr-util/STATUS
* docs/STATUS

Contributors looking for a mission:

* Just do an egrep on TODO or XXX in the source.

* Review the PatchAvailable bugs in the bug database.
  Append a comment saying Reviewed and tested.

* Open bugs in the bug database.

CURRENT RELEASE NOTES:

* When the CVS-SVN is done, there's a bogus avendor branch that should be
  removed from most files.  The branch was created 4/27/2004.  It's safest
  (and easiest) for now just to leave it in there; the MAIN branch and the
  APACHE_2_0_BRANCH are untouched and unharmed.  --jwoolley

RELEASE SHOWSTOPPERS:

* Handling of non-trailing / config by non-default handler is broken
  http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=105451701628081&w=2

* the edge connection filter cannot be removed 
  http://marc.theaimsgroup.com/?l=apache-httpd-dev&m=105366252619530&w=2

CURRENT VOTES:

* Promote mod_cache from experimental to non-experimental
  status (keep issues noted below in EXPERIMENTAL MODULES as
  items to be addressed as a supported module).
  +1: jim, bnicholes
  -0: jerenkrantz
  -1: stoddard
  There are a couple of problems that need to be resolved
  before this module is moved out of experimental. 
  1) We need to at least review and comment on the RFC violations
  2) Resolve issue of how to cache page fragments (or perhaps -if- we
  want to cache page fragments). Today, mod_cache/mod_mem_cache
  will cache #include 'virtual' requests (but not #include 'file' 
  requests). This was accomplished by making CACHE_IN a
  CONTENT_SET-1 filter to force it to run before the SUBREQ_CORE
  filter.  But now responses cannot be cached that include the
  effects of having been run through CONTENT_SET filters
  (mod_deflate, mod_expires, etc).  We could rerun all the
  CONTENT_SET filters on the cached response, but this will not
  work in all cases. For example, mod_expires relies on installing
  the EXPIRATION filter during fixups. Contents served out of
  mod_cache (out of the quick_handler) bypass -all- the request
  line server hooks (Ryan really hated this. It is great for
  performance, but bad because of the complications listed above).
 

  jerenkrantz: There are a slew of RFC compliance bugs filed in Bugzilla
   for mod_cache (see 'RFC 2616 violations' below).  I think
   fixing them is a pre-requisite before it leaves experimental status.

* httpd-std.conf and friends

  a) httpd-std.conf should be tailored by install (from src or
 binbuild) even if user has existing httpd.conf
 +1:   trawick, slive, gregames, ianh, Ken, wrowe, jwoolley, jim, nd,
   erikabele
   wrowe - prefer httpd.default.conf to avoid ambiguity with cvs

  b) tailored httpd-std.conf should be copied by install to
 sysconfdir/examples
 -0:   striker

  c) tailored httpd-std.conf should be installed to
 sysconfdir/examples or manualdir/exampleconf/
 +1:   slive, trawick, Ken, nd (prefer the latter), erikabele

  d) Installing a set of default config files when upgrading a server
 doesn't make ANY sense at all.
 +1:   ianh - medium/big sites don't use 'standard config' anyway, as it
  usually needs major customizations
 -1:   Ken, wrowe, jwoolley, jim, nd, erikabele
   wrowe - diff is wonderful when comparing old/new default configs,
   even for customized sites that ianh mentions
   jim - ... assuming that the default configs have been updated
 with the required inline docs to explain the
 changes

* If the parent process dies, should the remaining child processes
  gracefully self-terminate. Or maybe we should make it a runtime
  option, or have a concept of 2 parent processes (one being a 
  hot spare).
  See: Message-ID: [EMAIL PROTECTED]

  Self-destruct: Ken, Martin, Lars
  Not self-destruct: BrianP, Ian, Cliff, BillS
  Make it runtime configurable: Aaron, jim, Justin, wrowe, rederpj, nd

  /* The below was a concept on *how* to handle the problem */
  Have 2 parents: +1: jim
  -1: Justin, wrowe, rederpj, nd
  +0: Lars, Martin (while standing by, could it do
something useful?)

* Make the worker MPM the default MPM for threaded Unix boxes.
  +1:   Justin, Ian, Cliff, BillS, striker, wrowe, nd
  +0:   BrianP, Aaron