Re: httpd-1.3 patchlets

2005-08-07 Thread Sander Temme

That's wrowe, Mads and...

Thanks,

S.

On Jul 20, 2005, at 8:40 AM, William A. Rowe, Jr. wrote:


+1 on both patches; I can see how libhttpd.so gets stripped today.

I'd commit if there were a couple more +1's.

Bill

At 08:16 AM 7/20/2005, Sander Temme wrote:


Two very small patches against 1.3.

First one, make ab default to the highest SSL version available:

Index: src/support/ab.c
===
--- src/support/ab.c(revision 125243)
+++ src/support/ab.c(working copy)
@@ -1655,7 +1655,7 @@

#ifdef USE_SSL
SSL_library_init();
-    if (!(ctx = SSL_CTX_new(SSLv2_client_method()))) {
+    if (!(ctx = SSL_CTX_new(SSLv23_client_method()))) {
   fprintf(stderr, "Could not init SSL CTX: ");
   ERR_print_errors_fp(stderr);
   exit(1);

Secondly, a patch that keeps --without-execstrip from stripping the
httpd binary:

Index: configure
===
--- configure   (revision 219524)
+++ configure   (working copy)
@@ -927,6 +927,8 @@
;;
--without-execstrip)
iflags_program=`echo "$iflags_program" | sed -e 's/-s//'`
+iflags_core=`echo "$iflags_core" | sed -e 's/-S//' -e 's/ \"-S\"//'`
+iflags_dso=`echo "$iflags_dso" | sed -e 's/-S//' -e 's/ \"-S\"//'`

;;
--suexec-caller=*)
suexec_caller="$apc_optarg"

There is a special case for Darwin in configure that makes the httpd
binary get stripped even if --without-execstrip is specified. This
stops that from happening, so --without-execstrip leaves all binaries
unstripped. I think this adheres to the principle of least  
astonishment.


Let me know if you can fudge that in. (:

Thanks,

S.

--
[EMAIL PROTECTED]  http://www.temme.net/sander/
PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF









--
[EMAIL PROTECTED]  http://www.temme.net/sander/
PGP FP: 51B4 8727 466A 0BC3 69F4  B7B8 B2BE BC40 1529 24AF



smime.p7s
Description: S/MIME cryptographic signature


Re: [patch 1.3] The http_protocol.c C-L + T-E patch

2005-08-07 Thread William A. Rowe, Jr.
Still looking for a vote on this fix to core for 1.3, preventing
modules from seeing an invalid C-L + T-E combination from the
client, per RFC 2616.  This does not apply to proxy (as implemented
now) but may affect other handlers, as I noted below.  The sanest
action seems to be to adopt our 2.0 core change.

The clean patch to backport to 1.3 is at 

  http://people.apache.org/~wrowe/httpd-1.3-proto-cl-te.patch

With respect to fixes in individual modules, one should still
remember that this isn't a panacea: it's still possible for any
other misbehaving module to reinsert a Content-Length input header
at your handler before it's invoked.  But it seems worthwhile
to go ahead and fix the 80/20 case with these 3 lines of code already
committed to trunk and 2.0.x.
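For reference, the gist of the core change is tiny.  A rough illustration
only (the helper name is invented and the usual 1.3 includes are assumed),
not the committed patch itself:

    /* Per RFC 2616 4.4, if both Transfer-Encoding and Content-Length
     * arrive on a request, the Transfer-Encoding governs and the
     * Content-Length cannot be trusted, so drop it before any module
     * or handler gets to rely on it.
     */
    static void sanitize_cl_te(request_rec *r)
    {
        const char *te = ap_table_get(r->headers_in, "Transfer-Encoding");
        const char *cl = ap_table_get(r->headers_in, "Content-Length");

        if (te != NULL && cl != NULL) {
            ap_table_unset(r->headers_in, "Content-Length");
        }
    }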

Bill

At 04:36 PM 7/19/2005, William A. Rowe, Jr. wrote:
>At 04:11 PM 7/19/2005, Joe Orton wrote:
>>On Tue, Jul 19, 2005 at 02:59:14PM -0500, William Rowe wrote:
>>> Paul?  Joe?  Jeff?  Someone?
>>> 
>>> This is the only showstopper to a 1.3.34 candidate today, 
>>> since 1.3.x/src/modules/proxy/mod_proxy.c rejects T-E 
>>> for proxy request bodies.
>>
>>Since the 1.3 proxy already rejects such requests what does this patch 
>>actually fix?
>
>Hmmm...
>
>  mod_isapi?
>  mod_php?
>  mod_cgi?
>  mod_jk?
>
>shall I keep digging?





Re: svn commit: r230592 - in /httpd/httpd/branches/2.0.x: CHANGES STATUS modules/proxy/proxy_http.c

2005-08-07 Thread William A. Rowe, Jr.
At 01:15 PM 8/7/2005, Joe Orton wrote:
>On Sat, Aug 06, 2005 at 06:54:45PM -0500, William Rowe wrote:
>
>> Why do you bring this up now when I mentioned that I had vetoed
>> the change a good three weeks ago, in STATUS, and advised on
>> list that it would be reverted?  
>
>Because you putting random crap in STATUS is meaningless.  The R-T-C 
>process under which the 2.0.x tree is maintained is not.

Ahhh, so your crap in STATUS is called "process", while my crap
in STATUS is called "random crap"?  If you didn't agree with my
ability to veto this unreleased, already committed patch you
were welcome to add 2c, your choice of denomination, when I had
changed STATUS.  And I would have looked around two weeks ago
and seen that a late veto was invalid. And I'm agreeing with you, 
after looking at voting.html, which goes back to 1996.  I don't 
agree with the policy, as this patch hasn't 'left' Apache yet, but 
I agree the policy is clear.

So feel free to cut the crap and start talking to the code; your
comments and attitude have been way out of bounds.

Bottom line: trunk/ had diverged too far from 2.0.x/ - comparing
proxy_http to mod_proxy_http was no longer possible, making it
too difficult to see the changes simply.  You are asking to play
hand-me-a-rock, so I'm pelting you with 25 of them.  But if there
is anything you don't like at this point, I'm so thoroughly
disgusted with the state of proxy, and the fact that the HTTP
request and response vulnerability reports, from very early on,
interested way too few folks of our [EMAIL PROTECTED] team, that 
you are welcome to pick up the resulting boulder and lug it 
around yourself, if you prefer I not svn cp the resulting history 
back to httpd-2.0 after 3 +1's.  Please don't even bother asking 
me to bring you any more rocks; this has cost me dearly in sleep
and energy that should have been spent elsewhere.

If anyone considers reviewing each of those 25 commits individually
to be sufficient to ensure the new code is proper, I challenge them
to look at the resulting overall code.  It's the small incremental
reviews that let the junk which has accumulated keep piling up.
When blindly +1'ing patches, it's good to read more than 3 lines
back and 3 lines forward.  This is why I (should have earlier)
vetoed Jeff's patch; what he did was cool, but the propagated 
mistakes in CL/TE elections and other issues became a bigger mess 
with the addition of the new three-mode body feature.

Anyways, I trust both you and Jeff find the incremental layers
I've committed satisfactory for review; I didn't commit them in
the same order as they occurred in 2.1; I committed them in the
most reasonable order for a dedicated reviewer to understand the
entire scope of changes one piece at a time.  I tossed in the last
few just so that you could see *exactly* what is now different
between trunk/ and 2.0.x/, and decide for yourself if they should 
differ in the manner they do.

Bill
   



[PATCH] add "remove empty directories" option to htcacheclean

2005-08-07 Thread Colm MacCarthaigh
On Mon, Aug 08, 2005 at 12:04:44AM +0100, Colm MacCarthaigh wrote:
> Well that's a pretty easy race to solve within httpd. It won't be able
> to create the headers, or the body. The patch I've submitted cleans up
> that slight race. The file won't be cached on that server, but I don't
> think that's a big deal :-)

And since nothing says it like code, here's a patch to htcacheclean
which adds a -t option to clean out empty directories.
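Just as a rough sketch of the kind of check involved (this is not the
attached patch; the helper name is made up and the usual APR includes are
assumed), a directory only qualifies for removal once it contains nothing
but "." and "..":

    static int dir_is_empty(const char *path, apr_pool_t *p)
    {
        apr_dir_t *dir;
        apr_finfo_t info;
        int empty = 1;

        if (apr_dir_open(&dir, path, p) != APR_SUCCESS) {
            return 0;
        }
        /* Any entry other than "." or ".." means the directory is in use. */
        while (apr_dir_read(&info, APR_FINFO_NAME, dir) == APR_SUCCESS) {
            if (strcmp(info.name, ".") != 0 && strcmp(info.name, "..") != 0) {
                empty = 0;
                break;
            }
        }
        apr_dir_close(dir);
        return empty;
    }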

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]
Index: docs/man/htcacheclean.8
===
--- docs/man/htcacheclean.8 (revision 230717)
+++ docs/man/htcacheclean.8 (working copy)
@@ -19,7 +19,7 @@
 .el .ne 3
 .IP "\\$1" \\$2
 ..
-.TH "HTCACHECLEAN" 8 "2004-11-10" "Apache HTTP Server" "htcacheclean"
+.TH "HTCACHECLEAN" 8 "2005-08-08" "Apache HTTP Server" "htcacheclean"
 
 .SH NAME
 htcacheclean \- Clean up the disk cache
@@ -27,10 +27,10 @@
 .SH "SYNOPSIS"
  
 .PP
-\fBhtcacheclean\fR [ -\fBD\fR ] [ -\fBv\fR ] [ -\fBr\fR ] [ -\fBn\fR ] -\fBp\fR\fIpath\fR -\fBl\fR\fIlimit\fR
+\fBhtcacheclean\fR [ -\fBD\fR ] [ -\fBv\fR ] [ -\fBt\fR ] [ -\fBr\fR ] [ -\fBn\fR ] -\fBp\fR\fIpath\fR -\fBl\fR\fIlimit\fR
  
 .PP
-\fBhtcacheclean\fR -\fBb\fR [ -\fBn\fR ] [ -\fBi\fR ] -\fBd\fR\fIinterval\fR -\fBp\fR\fIpath\fR -\fBl\fR\fIlimit\fR
+\fBhtcacheclean\fR -\fBb\fR [ -\fBn\fR ] [ -\fBt\fR ] [ -\fBi\fR ] -\fBd\fR\fIinterval\fR -\fBp\fR\fIpath\fR -\fBl\fR\fIlimit\fR
  
 
 .SH "SUMMARY"
@@ -53,11 +53,14 @@
 Be verbose and print statistics\&. This option is mutually exclusive with the 
-d option\&.  
 .TP
 -r
-Clean thoroughly\&. This assumes that the Apache web server is not running (otherwise you may get garbage in the cache)\&. This option is mutually exclusive with the -d option\&.  
+Clean thoroughly\&. This assumes that the Apache web server is not running (otherwise you may get garbage in the cache)\&. This option is mutually exclusive with the -d option and implies the -t option\&.  
 .TP
 -n
 Be nice\&. This causes slower processing in favour of other processes\&. 
htcacheclean will sleep from time to time so that (a) the disk IO will be 
delayed and (b) the kernel can schedule other processes in the meantime\&.  
 .TP
+-t
+Delete all empty directories\&. By default only cache files are removed, however with some configurations the large number of directories created may require attention\&. If your configuration requires a very large number of directories, to the point that inode or file allocation table exhaustion may become an issue, use of this option is advised\&.  
+.TP
 -p\fIpath\fR
 Specify \fIpath\fR as the root directory of the disk cache\&. This should be 
the same value as specified with the CacheRoot directive\&.  
 .TP
Index: docs/manual/programs/htcacheclean.html.en
===
--- docs/manual/programs/htcacheclean.html.en   (revision 230717)
+++ docs/manual/programs/htcacheclean.html.en   (working copy)
@@ -39,6 +39,7 @@
 htcacheclean
 [ -D ]
 [ -v ]
+[ -t ]
 [ -r ]
 [ -n ]
 -ppath
@@ -46,6 +47,7 @@
 
 htcacheclean -b
 [ -n ]
+[ -t ]
 [ -i ]
 -dinterval
 -ppath
@@ -71,7 +73,8 @@
 -r
 Clean thoroughly. This assumes that the Apache web server is
 not running (otherwise you may get garbage in the cache). This option
-is mutually exclusive with the -d option.
+is mutually exclusive with the -d option and implies
+the -t option.
 
 -n
 Be nice. This causes slower processing in favour of other
@@ -79,6 +82,14 @@
 so that (a) the disk IO will be delayed and (b) the kernel can schedule
 other processes in the meantime.
 
+-t
+Delete all empty directories. By default only cache files are
+removed, however with some configurations the large number of
+directories created may require attention. If your configuration
+requires a very large number of directories, to the point that
+inode or file allocation table exhaustion may become an issue, use 
+of this option is advised.
+
 -ppath
 Specify path as the root directory of the disk cache. This
 should be the same value as specified with the CacheRoot directive.
Index: docs/manual/programs/htcacheclean.xml
===
--- docs/manual/programs/htcacheclean.xml   (revision 230717)
+++ docs/manual/programs/htcacheclean.xml   (working copy)
@@ -39,6 +39,7 @@
 htcacheclean
 [ -D ]
 [ -v ]
+[ -t ]
 [ -r ]
 [ -n ]
 -ppath
@@ -46,6 +47,7 @@
 
 htcacheclean -b
 [ -n ]
+[ -t ]
 [ -i ]
 -dinterval
 -ppath
@@ -71,7 +73,8 @@
 -r
 Clean thoroughly. This assumes that the Apache web server is
 not running (otherwise you may get garbage in the cache). This option
-is mutually exclusive with the -d option.
+is mutually exclusive with t

Re: [PATCH] fix incorrect 304's responses when cache is unwritable

2005-08-07 Thread Colm MacCarthaigh
On Mon, Aug 08, 2005 at 12:45:21AM +0200, [EMAIL PROTECTED] wrote:
> Is a traversal really needed? What about going back the full path of the
> header / data file to the cache root and removing each component on the
> way by calling apr_dir_remove on each component until it fails?

I'm not sure if apr_dir_remove guarantees failure when operated on
non-empty directories. If it does then that's an easy enough way.
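As a rough illustration of that walk-back-up idea (the helper name and the
stop-at-cache-root test are made up, and it assumes apr_dir_remove() does
fail on non-empty directories):

    static void prune_empty_dirs(const char *file, const char *root,
                                 apr_pool_t *p)
    {
        /* Start from the directory holding the just-removed file and walk
         * upward, removing each now-empty directory until a removal fails
         * or we are about to touch the cache root itself.  Crude stop
         * test: assumes root has no trailing slash, while
         * ap_make_dirstr_parent() always returns one. */
        char *dir = ap_make_dirstr_parent(p, file);

        while (strlen(dir) > strlen(root) + 1
               && apr_dir_remove(dir, p) == APR_SUCCESS) {
            dir = ap_make_dirstr_parent(p, dir);
        }
    }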

> > Are 404's being served incorrectly in some circumstances? 
> 
> You are right that 404's do not get cached. But if a cached resource
> vanishes on the backend the cache entry is not removed. 

Aha, now I understand what this patch is meant to do. 

> It is needed because:
> 
> - In the case of an internal Apache 404 error page the content filter
> chain is not run (especially not CACHE_SAVE_FILTER). This is the
> reason why cache_removal_url is a protocol filter.
> 
> - In the case of a user-specified error page with ErrorDocument the
> CACHE_SAVE_FILTER is run with the wrong request (the one that belongs
> to the custom error page, not the one of the original request).

Makes sense, O.k., now looking at it and knowing what it is supposed to
do, it looks fine. The only things I've noticed are:

the obviously mis-copied CACHE_SAVE comment in
cache_remove_url_filter()  :-)

The extraneous "-e debug" comments in mod_disk_cache

In mod_disk_cache, you try to delete the data file even
if removing the header file was unsuccessful. Personally
I wouldn't be very comfortable with this, as the header
is a useful source of information to an administrator
tracking down problems and it's the only easy way to determine
what the data file is. If you can't delete the header
file, I'd recommend leaving the data file in place. They
make more sense if they are both in the same state.

In cache_remove_url, you have code which tries to
determine if the cache->handle or cache->stale_handle
should be removed, but shouldn't it always be the
stale_handle? You only add the remove_url filter if
cache_select_url() != OK, which means cache->handle
will always be NULL.

But apart from those looks fine. I'll merge it with my small
patch and test it properly tomorrow.

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


Re: [PATCH] fix incorrect 304's responses when cache is unwritable

2005-08-07 Thread Colm MacCarthaigh
On Mon, Aug 08, 2005 at 12:07:09AM +0200, Andreas Steinmetz wrote:
> Colm MacCarthaigh wrote:
> > For what it's worth though, htcacheclean itself has this massive bug,
> > and does not do any directory cleanup, so your patch isn't alone in
> > doing this.
> 
> The problem is that you can't remove directories with htcacheclean
> without generating race conditions wrt. httpd.
> 
> Assume that htcacheclean removes the last entries from a directory and
> then removes the directory. At the same time httpd wants to use the
> directory as it was already there...

Well that's a pretty easy race to solve within httpd. It won't be able
to create the headers, or the body. The patch I've submitted cleans up
that slight race. The file won't be cached on that server, but I don't
think that's a big deal :-)

> You'll be better off setting CacheDirLength and CacheDirLevels to
> sensible values. Try:
> 
> CacheDirLength 1
> CacheDirLevels 2
> 
> You can get at most 64^2 directories which is 4096 directories.

But then the link count for the directories themselves gets very
large, and common filesystems like ext3 or XFS get pretty
slow. The whole point of allowing them to be split up is to avoid
this :)

>  Any
> reasonable filesystem can stand that. Now assume an average of 500
> cached objects per directory which the filesystem should easily manage.
> I tend to believe that 2048000 cached objects is quite a lot. If the
> size of these objects has an average of only 1KB you already have a
> total cache size of 2GB.

Well I run a cache which is 135GB, which regularly has nearly 50 million
jpegs in it (revisions of satellite imagery of the entire planet, not
porn), but my numbers are always insane. For another example:

http://ftp.heanet.ie/status/

I turned on the cache only about 2 hours ago (after applying the patch
I've sent) and it's already at 537,029 cached files. (though it fills in
spurts, as other projects pull from us).

> Even if you tried 3 for CacheDirLevels it would be 262144 directories
> maximum which should be still fine for any reasonable filesystem.

You're right, and I'm being a bit over the top with my numbers and
examples, but I still think there is danger of inode exhaustion in
real-world configurations. 

I've run proxy clusters in the past which handled tens of millions of
requests per day. I don't see why a cluster of Apache 2 proxies
shouldn't be able to share a network accessible mod_disk_cache cache
area, for example.

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


Re: [PATCH] fix incorrect 304's responses when cache is unwritable

2005-08-07 Thread r . pluem


Colm MacCarthaigh wrote:
> On Sun, Aug 07, 2005 at 09:59:15PM +0200, [EMAIL PROTECTED] wrote:
> 
>>As you already mentioned the remove_url implementation
>>of mod_disk_cache is currently an empty dummy :-).
> 
> 
> I've been thinking about that, but it's not entirely as easy as it first
> seems, or indeed as htcacheclean wrongly assumes.
> 
> unlinking the data/header files is not good enough, directories are also
> resources, and consume space in the inode table. Also, most filesystems
> slow down as the amount of hard-links in any directory increases. 

Indeed a very valid point. So, yes, as far as possible all directories should be
removed when the header and the data files get removed.
But additionally my experience with some filesystem types is that removing
files from a directory is not enough to speed things up, because the directories
themselves do not shrink even if entries get removed from the directory
(I noticed this behaviour with ext3 on Linux and ufs / Veritas FS on Solaris,
whereas reiserfs does shrink). This could pose a problem for the cache root where
all the temporary data files get created before being moved to the correct hashed
directory.

[..cut..]

> 
> Because of Windows and non-standard filesystems like AFS (read the GNU
> find(1) manpage section on -noleaf) assumptions about the directory link
> counts can't be made, which means a full-scale readdir() and directory
> traversal to remove the cache entry. This can be a pretty slow
> operation. Possibly slow enough to merit implementing it in an
> APR_HOOK_REALLY_LAST hook, so that it can avoid slowing down the request
> serving.

Is a traversal really needed? What about going back the full path of the
header / data file to the cache root and removing each component on the
way by calling apr_dir_remove on each component until it fails?


[..cut..]

>>
>>I would really appreciate if you find some time to review my patch.
> 
> 
> I'm not a committer, so my review is only informational. But I'm
> familiar enough with the cache code; I've been running and patching it
> for my own purposes for years, so I've taken a look.

I also appreciate comments from non-committers, as

 - this will (hopefully) draw the attention of the committers.
 - it improves the patch

> 
> I'm not really sure what it is aiming to do in relation to 404's.  404's
> are never saved to the cache in the first place, the check in
> mod_cache.c sorts that out;
> 
> if (r->status != HTTP_OK && r->status != HTTP_NON_AUTHORITATIVE
> && r->status != HTTP_MULTIPLE_CHOICES
> && r->status != HTTP_MOVED_PERMANENTLY
> && r->status != HTTP_NOT_MODIFIED) {
> /* RFC2616 13.4 we are allowed to cache 200, 203, 206, 300, 301 or 410
>  * We don't cache 206, because we don't (yet) cache partial responses.
>  * We include 304 Not Modified here too as this is the origin server
>  * telling us to serve the cached copy.
>  */
> reason = apr_psprintf(p, "Response status %d", r->status);
> }
> 
> Are 404's being served incorrectly in some circumstances? 

You are right that 404's do not get cached. But if a cached resource vanishes
on the backend the cache entry is not removed. This

- fills up the cache
- leads to the delivery of the cached resource when the request does not force
  mod_cache to revalidate the entry. This is even the case *if* there had been
  a request before which forced mod_cache to revalidate the cache entry and the
  revalidation found out that the resource has vanished on the backend.

> 
> In any case;
> 
> Your remove_url code in mod_disk_cache takes the approach of just
> unlinking the data and header file I discussed above. But well, since
> so does htcacheclean, that's no reflection on the quality of the patch.

See my comments above. Your points are very valid. As I do not want to put
too much into a single patch I currently do not intend to adjust this, but
I definitely will investigate this for a second step.

> 
> I don't think the cache_removal_url filter is necessary, the
> cache_save_filter already has a lot of code in place for handling
> the case of a stale cache handle where I think it is better placed.

It is needed because:

- In the case of an internal Apache 404 error page the content filter chain
  is not run (especially not CACHE_SAVE_FILTER). This is the reason why
  cache_removal_url is a protocol filter.

- In the case of a user-specified error page with ErrorDocument the
  CACHE_SAVE_FILTER is run with the wrong request (the one that belongs to the
  custom error page, not the one of the original request).


[..cut..]

Regards

Rüdiger


Re: [PATCH] fix incorrect 304's responses when cache is unwritable

2005-08-07 Thread Andreas Steinmetz
Colm MacCarthaigh wrote:
> For what it's worth though, htcacheclean itself has this massive bug,
> and does not do any directory cleanup, so your patch isn't alone in
> doing this.

The problem is that you can't remove directories with htcacheclean
without generating race conditions wrt. httpd.

Assume that htcacheclean removes the last entries from a directory and
then removes the directory. At the same time httpd wants to use the
directory as it was already there...

You'll be better off setting CacheDirLength and CacheDirLevels to
sensible values. Try:

CacheDirLength 1
CacheDirLevels 2

You can get at most 64^2 directories which is 4096 directories. Any
reasonable filesystem can stand that. Now assume an average of 500
cached objects per directory which the filesystem should easily manage.
I tend to believe that 2048000 cached objects is quite a lot. If the
size of these objects has an average of only 1KB you already have a
total cache size of 2GB.
Even if you tried 3 for CacheDirLevels it would be 262144 directories
maximum which should be still fine for any reasonable filesystem.
-- 
Andreas Steinmetz   SPAMmers use [EMAIL PROTECTED]


Re: [PATCH] fix incorrect 304's responses when cache is unwritable

2005-08-07 Thread Colm MacCarthaigh
On Sun, Aug 07, 2005 at 09:59:15PM +0200, [EMAIL PROTECTED] wrote:
> As you already mentioned the remove_url implementation
> of mod_disk_cache is currently an empty dummy :-).

I've been thinking about that, but it's not entirely as easy as it first
seems, or indeed as htcacheclean wrongly assumes.

unlinking the data/header files is not good enough, directories are also
resources, and consume space in the inode table. Also, most filesystems
slow down as the amount of hard-links in any directory increases. 

The hash function used by mod_cache is 128-bits in size, which given any
conceivable settings for CacheDirLevels and CacheDirLength is still
going to be more than enough to exhaust the inode table on any rational
filesystem. Considering that one of the main uses of mod_cache is for
mod_proxy users, where there is an infinity of keys (urls) this means
that the filesystem will eventually become unusable unless the
administrator periodically weighs in and removes empty directories
manually.

Because of Windows and non-standard filesystems like AFS (read the GNU
find(1) manpage section on -noleaf) assumptions about the directory link
counts can't be made, which means a full-scale readdir() and directory
traversal to remove the cache entry. This can be a pretty slow
operation. Possibly slow enough to merit implementing it in an
APR_HOOK_REALLY_LAST hook, so that it can avoid slowing down the request
serving.

An alternative which I think was discussed here is to create a cgid-like
process which deals with this kind of task, as well as more complex
things like managing just-to-expire files. But that kind of steps on the
toes of the cache_requester SoC work.

Either way, implementing an immediate unlink() on the files, just to be
able to return useful things about the writability of the filesystem,
but leave the more complex directory cleanup until the file has been
served is what I'm thinking of.
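For the immediate-unlink part, something like this minimal sketch is what I
have in mind (the function name is made up, the usual APR includes are
assumed, and the path arguments stand in for however the provider derives
the header/data file names):

    static int remove_url_sketch(const char *header_path,
                                 const char *data_path, apr_pool_t *p)
    {
        /* Unlink the header first; only touch the data file if that worked,
         * so both files stay in the same state for an administrator chasing
         * problems later.  Directory cleanup is deferred. */
        if (apr_file_remove(header_path, p) != APR_SUCCESS) {
            return DECLINED;
        }
        if (apr_file_remove(data_path, p) != APR_SUCCESS) {
            return DECLINED;
        }
        return OK;
    }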

For what it's worth though, htcacheclean itself has this massive bug,
and does not do any directory cleanup, so your patch isn't alone in
doing this.

> I had a similar problem with 404 responses, and wrote a patch for this which 
> is
> currently in discussion (attached patch again to this mail):
> 
> http://mail-archives.apache.org/mod_mbox/httpd-dev/200507.mbox/[EMAIL 
> PROTECTED]
> 
> It actually does implement a removal of the files in mod_disk_cache and
> should also handle your problem. If it does not, I am pretty sure that a 
> small modification
> to the patch would do it.
> 
> I would really appreciate if you find some time to review my patch.

I'm not a committer, so my review is only informational. But I'm
familiar enough with the cache code; I've been running and patching it
for my own purposes for years, so I've taken a look.

I'm not really sure what it is aiming to do in relation to 404's.  404's
are never saved to the cache in the first place, the check in
mod_cache.c sorts that out;

if (r->status != HTTP_OK && r->status != HTTP_NON_AUTHORITATIVE
&& r->status != HTTP_MULTIPLE_CHOICES
&& r->status != HTTP_MOVED_PERMANENTLY
&& r->status != HTTP_NOT_MODIFIED) {
/* RFC2616 13.4 we are allowed to cache 200, 203, 206, 300, 301 or 410
 * We don't cache 206, because we don't (yet) cache partial responses.
 * We include 304 Not Modified here too as this is the origin server
 * telling us to serve the cached copy.
 */
reason = apr_psprintf(p, "Response status %d", r->status);
}

Are 404's being served incorrectly in some circumstances? 

In any case;

Your remove_url code in mod_disk_cache takes the approach of just
unlinking the data and header file I discussed above. But well, since
so does htcacheclean, that's no reflection on the quality of the patch.

I don't think the cache_removal_url filter is necessary, the
cache_save_filter already has a lot of code in place for handling
the case of a stale cache handle where I think it is better placed.

The (much smaller) patch I've just submitted, plus your changes to
mod_disk_cache, mod_mem_cache and cache_storage.c would do the same job
with a third of the code :)

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]


Bug report for Apache httpd-2.0 [2005/08/07]

2005-08-07 Thread bugzilla
+---+
| Bugzilla Bug ID   |
| +-+
| | Status: UNC=Unconfirmed NEW=New ASS=Assigned|
| | OPN=ReopenedVER=Verified(Skipped Closed/Resolved)   |
| |   +-+
| |   | Severity: BLK=Blocker CRI=CriticalMAJ=Major |
| |   |   MIN=Minor   NOR=Normal  ENH=Enhancement   |
| |   |   +-+
| |   |   | Date Posted |
| |   |   |  +--+
| |   |   |  | Description  |
| |   |   |  |  |
| 7483|Ass|Enh|2002-03-26|Add FileAction directive to assign a cgi interpret|
| 7741|Ass|Nor|2002-04-04|some directives may be placed outside of proper co|
| 7862|New|Enh|2002-04-09|suexec never log a group name.|
| 8483|Inf|Min|2002-04-24|apache_2.0 .msi installer breaks .log and .conf fi|
| 8713|New|Min|2002-05-01|No Errorlog on PROPFIND/Depth:Infinity|
| 8925|New|Cri|2002-05-09|Service Install (win32 .msi/.exe) fails for port i|
| 9727|New|Min|2002-06-09|Double quotes should be flagged as T_HTTP_TOKEN_ST|
| 9903|Opn|Maj|2002-06-16|mod_disk_cache does not remove temporary files|
| 9945|New|Enh|2002-06-18|[PATCH] new funtionality for apache bench |
|10114|Ass|Enh|2002-06-21|Negotiation gives no weight to order, only q value|
|10154|Ass|Nor|2002-06-23|ApacheMonitor interferes with service uninstall/re|
|10722|Opn|Nor|2002-07-12|ProxyPassReverse doesn't change cookie paths  |
|10775|Ass|Cri|2002-07-13|SCRIPT_NAME wrong value   |
|10932|Opn|Enh|2002-07-18|Allow Negative regex in LocationMatch |
|11035|New|Min|2002-07-22|Apache adds double entries to headers generated by|
|11294|New|Enh|2002-07-30|desired vhost_alias option|
|11427|Opn|Maj|2002-08-02|Possible Memory Leak in CGI script invocation |
|11540|Opn|Nor|2002-08-07|ProxyTimeout ignored  |
|11580|Opn|Enh|2002-08-09|generate Content-Location headers |
|11971|Opn|Nor|2002-08-23|HTTP proxy header "Via" with wrong hostname if Ser|
|11997|Opn|Maj|2002-08-23|Strange critical errors possibly related to mpm_wi|
|12033|Opn|Nor|2002-08-26|Graceful restart immidiately result in [warn] long|
|12340|Opn|Nor|2002-09-05|WindowsXP proxy, child process exited with status |
|12355|Opn|Nor|2002-09-06|SSLVerifyClient directive in location make post to|
|12680|New|Enh|2002-09-16|Digest authentication with integrity protection   |
|12885|New|Enh|2002-09-20|windows 2000 build information: mod_ssl, bison, et|
|13029|New|Nor|2002-09-26|Win32 mod_cgi failure with non-ASCII characters in|
|13101|Inf|Cri|2002-09-27|Using mod_ext_filter with mod_proxy and http/1.1 c|
|13507|New|Enh|2002-10-10|capturing stderr from mod_cgi |
|13577|New|Maj|2002-10-13|mod_proxy mangles query string with mod_rewrite   |
|13599|Ass|Nor|2002-10-14|autoindex formating broken for multibyte sequences|
|13603|New|Nor|2002-10-14|incorrect DOCUMENT_URI in mod_autoindex with Heade|
|13661|Ass|Enh|2002-10-15|Apache cannot not handle dynamic IP reallocation  |
|13946|Inf|Nor|2002-10-24|reverse proxy errors when a document expires from |
|13986|Ass|Enh|2002-10-26|remove default MIME-type  |
|14016|Inf|Nor|2002-10-28|Problem when using mod_ext_filter with ActivePerl |
|14090|New|Maj|2002-10-30|mod_cgid always writes to main server error log   |
|14206|New|Nor|2002-11-04|DirectoryIndex circumvents -FollowSymLinks option |
|14227|Ass|Nor|2002-11-04|Error handling script is not started (error 500) o|
|14335|Opn|Enh|2002-11-07|AddOutputFilterByType doesn't work with proxy requ|
|14496|New|Enh|2002-11-13|Cannot upgrade 2.0.39 -> 2.0.43. Must uninstall fi|
|14556|Inf|Nor|2002-11-14|mod_cache with mod_mem_cache enabled doesnt cash m|
|14750|Inf|Maj|2002-11-21|Windows 9x: apr_socket_opt_set cannot set SO_KEEPA|
|14858|New|Enh|2002-11-26|mod_cache never caches responses for requests requ|
|14922|Ass|Enh|2002-11-28| is currently hardcoded to 'apache2'  |
|15045|Ass|Nor|2002-12-04|addoutputfilterbytype doesn't work for defaulted t|
|15221|New|Nor|2002-12-10|reference to old script: sign.sh  |
|15233|Opn|Nor|2002-12-10|move AddType application/x-x509-ca-cert from ssl.c|
|15235|New|Nor|2002-12-10|add application/x-x509-email-cert, application/x-x|
|15625|New|Nor|2002-12-23|mention mod_ssl in http://nagoya.apache.org/dist/h|
|15626|New|Nor|2002-12-23|mention which modules are part of the (binary) dis|
|15631|New|Nor|

Bug report for Apache httpd-1.3 [2005/08/07]

2005-08-07 Thread bugzilla
+---+
| Bugzilla Bug ID   |
| +-+
| | Status: UNC=Unconfirmed NEW=New ASS=Assigned|
| | OPN=ReopenedVER=Verified(Skipped Closed/Resolved)   |
| |   +-+
| |   | Severity: BLK=Blocker CRI=CriticalMAJ=Major |
| |   |   MIN=Minor   NOR=Normal  ENH=Enhancement   |
| |   |   +-+
| |   |   | Date Posted |
| |   |   |  +--+
| |   |   |  | Description  |
| |   |   |  |  |
| 8329|New|Nor|2002-04-20|mime_magic gives 500 and no error_log on Microsoft|
| 8372|Ass|Nor|2002-04-22|Threadsaftey issue in Rewrite's cache [Win32/OS2/N|
| 8849|New|Nor|2002-05-07|make install errors as root on NFS shares |
| 8882|New|Enh|2002-05-07|[PATCH] mod_rewrite communicates with external rew|
| 9037|New|Min|2002-05-13|Slow performance when acessing an unresolved IP ad|
| 9126|New|Blk|2002-05-15|68k-next-openstep v. 4.0  |
| 9726|New|Min|2002-06-09|Double quotes should be flagged as T_HTTP_TOKEN_ST|
| 9894|New|Maj|2002-06-16|getline sub in support progs collides with existin|
| |New|Nor|2002-06-19|Incorrect default manualdir value with layout.|
|10038|New|Min|2002-06-20|ab benchmaker hangs on 10K https URLs with keepali|
|10073|New|Maj|2002-06-20|upgrade from 1.3.24 to 1.3.26 breaks include direc|
|10169|New|Nor|2002-06-24|Apache seg faults due to attempt to access out of |
|10178|New|Maj|2002-06-24|Proxy server cuts off begining of buffer when spec|
|10195|New|Nor|2002-06-24|Configure script erroneously detects system Expat |
|10199|New|Nor|2002-06-24|Configure can't handle directory names with unders|
|10243|New|Maj|2002-06-26|CGI scripts not getting POST data |
|10354|New|Nor|2002-06-30|ErrorDocument(.htaccess) fails when passed URL wit|
|10446|Opn|Blk|2002-07-03|spaces in link to http server seen as foreign char|
|10470|New|Cri|2002-07-04|proxy module will not correctly serve mixed case f|
|10666|New|Enh|2002-07-10|line-end comment error message missing file name  |
|10744|New|Nor|2002-07-12|suexec might fail to open log file|
|10747|New|Maj|2002-07-12|ftp SIZE command and 'smart' ftp servers results i|
|10760|New|Maj|2002-07-12|empty ftp directory listings from cached ftp direc|
|10939|New|Maj|2002-07-18|directory listing errors  |
|11020|New|Maj|2002-07-21|APXS only recognise tests made by ./configure |
|11236|New|Min|2002-07-27|Possible Log exhaustion bug?  |
|11265|New|Blk|2002-07-29|mod_rewrite fails to encode special characters|
|11765|New|Nor|2002-08-16|.apaci.install.tmp installs in existing httpd.conf|
|11986|New|Nor|2002-08-23|Restart hangs when piping logs on rotation log pro|
|12096|New|Nor|2002-08-27|apxs does not handle binary dists installed at non|
|12574|New|Nor|2002-09-12|Broken images comes from mod_proxy when caching ww|
|12583|New|Nor|2002-09-12|First piped log process do not handle SIGTERM |
|12598|Opn|Maj|2002-09-12|Apache hanging in Keepalive State |
|13188|New|Nor|2002-10-02|does not configure correctly for hppa64-hp-hpux11.|
|13274|Ass|Nor|2002-10-04|Subsequent requests are destroyed by the request e|
|13607|Opn|Enh|2002-10-14|Catch-all enhancement for vhost_alias?|
|13687|New|Min|2002-10-16|Leave Debug symbol on Darwin  |
|13822|New|Maj|2002-10-21|Problem while running Perl modules accessing CGI::|
|14095|Opn|Nor|2002-10-30|Change default Content-Type (DefaultType) in defau|
|14250|New|Maj|2002-11-05|Alternate UserDirs don't work intermittantly  |
|14443|New|Maj|2002-11-11|Keep-Alive randomly causes TCP RSTs   |
|14448|Opn|Cri|2002-11-11|Apache WebServer not starting if installed on Comp|
|14518|Opn|Nor|2002-11-13|QUERY_STRING parts not incorporated by mod_rewrite|
|14670|New|Cri|2002-11-19|Apache didn't deallocate unused memory|
|14748|New|Nor|2002-11-21|Configure Can't find DBM on Mac OS X  |
|15011|New|Nor|2002-12-03|Apache processes not timing out on Solaris 8  |
|15028|New|Maj|2002-12-03|RedirectMatch does not escape properly|
|15242|New|Blk|2002-12-10|mod_cgi prevents handling of OPTIONS request  |
|16236|New|Maj|2003-01-18|Include directive in Apache is not parsed within c|
|16241|New|Maj|2003-01-19|Apache processes takes 100% CPU until killed manua|
|16492|New|Maj|2003-01-28|mod_proxy doesn't correctly retrieve values from C|
|16493|

Re: svn commit: r230592 - in /httpd/httpd/branches/2.0.x: CHANGES STATUS modules/proxy/proxy_http.c

2005-08-07 Thread William A. Rowe, Jr.
At 08:39 PM 8/6/2005, Jeff Trawick wrote:
>On 8/6/05, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>> Author: wrowe
>> Date: Sat Aug  6 14:29:05 2005
>> New Revision: 230592
>> 
>> URL: http://svn.apache.org/viewcvs?rev=230592&view=rev
>> Log:
>> 
>>  As much as it pains me, seriously, it seems that reviewing the re-backport
>>  of this code was too illegible for review, so it seems we will need to
>>  re-review a fresh backport from httpd trunk.
>
>It looks to me that we have lost our second of two chances to go
>through a stepwise, single-problem/single-solution approach to
>resolving the issues with this code, even after multiple comments
>stating that mixing that set of changes was undesired.  

The problem, Jeff, is that you and Joe didn't state a specific
preference that 'I'm -1 to x and y, +1 to n and z'.  Patches are
a lousy method for incorporating layered multiple changes.  SVN
is a good method, and I've always been happy to commit these
fixes layer-by-layer as I'd done in trunk/.  I think we should
have followed Jim's sage advice and created a branch, and I'll
do so now.

>It isn't impossible to move forward from this point, but I don't 
>understand why we're still in big-patch mode after those previous 
>comments.

This is a fair question, so I'll turn it back around.  How wasn't
171205 a 'big patch' :-?  But in all seriousness...

As I reached the wrong conclusions on voting by following the
guidelines.html rather than voting.html, I'll put this back to
you; would you rather I recommit 171205 for you, or do you prefer
we look at a fresh backport?  I am fine with either way, and will have
it fixed shortly.  It's totally up to you if you want to ack my
veto of the backport, or nak it and I'll undo the damage.  

Mi culpa,

Bill




Re: svn commit: r230592 - in /httpd/httpd/branches/2.0.x: CHANGES STATUS modules/proxy/proxy_http.c

2005-08-07 Thread William A. Rowe, Jr.
At 03:34 AM 8/7/2005, [EMAIL PROTECTED] wrote:

>Sorry for being confused, but I just want to understand the commit 
>policy/process on 2.0.x better.

See http://httpd.apache.org/dev/voting.html for the definitive
answer, and my post to Jeff Trawick shortly.  And don't do what
I did, which was try to use guidelines.html for reference :)

Bill




Re: [PATCH] fix incorrect 304's responses when cache is unwritable

2005-08-07 Thread r . pluem


Colm MacCarthaigh wrote:
> I finally developed some time to look into this. mod_cache doesn't
> behave very nicely when the cache area fills. Of course administrators
> should make sure it doesn't fill in the first place, but nevertheless a
> few people have hit this bug (me included) and I think mod_cache should
> handle the problem gracefully.
> 
> Anyway, the problem occurs when the cache is unwritable, and mod_cache
> needs to revalidate a cached entity. cache_select_url handles this by
> rewriting headers_in to become a conditional request. However the code
> in cache_save_filter which turns the request back into its original
> (possibly unconditional) format is itself conditional on store_headers()
> working. 
> 
> The patch I've attached should be reasonably self-documenting, any
> questions - just ask. 
> 

As you already mentioned the remove_url implementation
of mod_disk_cache is currently an empty dummy :-).

I had a similar problem with 404 responses, and wrote a patch for this which is
currently in discussion (attached patch again to this mail):

http://mail-archives.apache.org/mod_mbox/httpd-dev/200507.mbox/[EMAIL PROTECTED]

It actually does implement a removal of the files in mod_disk_cache and
should also handle your problem. If it does not, I am pretty sure that a small 
modification
to the patch would do it.

I would really appreciate if you find some time to review my patch.

Thanks and regards

Rüdiger
Index: modules/cache/mod_mem_cache.c
===
--- modules/cache/mod_mem_cache.c   (Revision 220022)
+++ modules/cache/mod_mem_cache.c   (Arbeitskopie)
@@ -601,7 +601,7 @@
 /* remove_url()
  * Notes:
  */
-static int remove_url(const char *key) 
+static int remove_url(cache_handle_t *h, apr_pool_t *p) 
 {
 cache_object_t *obj;
 int cleanup = 0;
@@ -609,8 +609,8 @@
 if (sconf->lock) {
 apr_thread_mutex_lock(sconf->lock);
 }
-  
-obj = cache_find(sconf->cache_cache, key);   
+ 
+obj = h->cache_obj; 
 if (obj) {
 cache_remove(sconf->cache_cache, obj);
 /* For performance, cleanup cache object after releasing the lock */
Index: modules/cache/mod_cache.c
===
--- modules/cache/mod_cache.c   (Revision 220022)
+++ modules/cache/mod_cache.c   (Arbeitskopie)
@@ -29,6 +29,7 @@
  */
 static ap_filter_rec_t *cache_save_filter_handle;
 static ap_filter_rec_t *cache_out_filter_handle;
+static ap_filter_rec_t *cache_remove_url_filter_handle;
 
 /*
  * CACHE handler
@@ -123,6 +124,22 @@
 /* add cache_save filter to cache this request */
 ap_add_output_filter_handle(cache_save_filter_handle, NULL, r,
 r->connection);
+
+ap_log_error(APLOG_MARK, APLOG_DEBUG, APR_SUCCESS, r->server,
+  "Adding CACHE_REMOVE_URL filter.");
+
+/* 
+ * add cache_remove_url filter to this request to remove the
+ * cache entry if it is needed. Store the filter in the cache
+ * request rec for easy removal if it turns out that we do not
+ * need it, because we are caching it. Also put the current
+ * cache request rec in the filter context, as the request that
+ * is available later during running the filter maybe
+ * different due to an internal redirect.
+ */
+cache->cache_remove_url_filter = 
+    ap_add_output_filter_handle(cache_remove_url_filter_handle, cache, r,
+                                r->connection);
 }
 else if (cache->stale_headers) {
 ap_log_error(APLOG_MARK, APLOG_DEBUG, APR_SUCCESS, r->server,
@@ -436,11 +453,6 @@
 if (reason) {
 ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
  "cache: %s not cached. Reason: %s", url, reason);
-/* remove this object from the cache 
- * BillS Asks.. Why do we need to make this call to remove_url?
- * leave it in for now..
- */
-cache_remove_url(r, url);
 
 /* remove this filter from the chain */
 ap_remove_output_filter(f);
@@ -542,6 +554,15 @@
  "cache: Caching url: %s", url);
 
 /*
+ * We are actually caching this response, so it no longer makes
+ * sense for cache_remove_url_filter to remove this entry from the
+ * cache. Remove that filter from the chain instead.
+ */ 
+ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
+ "cache: Removing CACHE_REMOVE_URL filter.");
+ap_remove_output_filter(cache->cache_remove_url_filter);
+
+/*
  * We now want to update the cache file header information with
  * the new date, last modified, expire and content length and write
  * it away to our cache file. First, we determine these values from
@

Re: svn commit: r230592 - in /httpd/httpd/branches/2.0.x: CHANGES STATUS modules/proxy/proxy_http.c

2005-08-07 Thread Joe Orton
On Sat, Aug 06, 2005 at 06:54:45PM -0500, William Rowe wrote:
> At 05:28 PM 8/6/2005, Joe Orton wrote:
> >That patch went through the normal 2.0.x review process and received 
> >three +1s and no vetoes.  You absolutely cannot come along a few months 
> >later and say "oh, actually, -1" and rip stuff out that you now decide 
> >you don't like.
> 
> It received 3 +1 votes, a slim review.  It was never released, 
> so it's not in fact 'done'.  If unreleased changes are incorrect, 
> they need to be fixed, or needs to be reverted.

If you now think the changes are incorrect then you need to go through 
the review process to correct them.  We've done this before.

> >  You missed the chance to veto
> 
> How so?
> 
> You can't veto a release.  You can veto code; certainly if there
> is a 'deadline' it doesn't start until we begin talking about 
> released code, and that isn't the case here.

No, you can't veto "code".  You vote on *changes to the code*.  That's 
what we've been doing for the last N years with 2.0.  That's how the 
previous state of the 2.0.x branch was obtained.  Again, if you think 
that the tree should be reverted to an older state, then you need to go 
through the normal process.

> > -- if you want to change 
> >the state of the 2.0.x tree now then you need to go through 
> >the review process like everyone else does.
> 
> I'll respectfully disagree, but I have to ask...

You're making a complete mockery of the time and effort expended by 
those who maintain the 2.0.x tree.  Please restore the 2.0.x tree to the 
state which was attained through the normal voting process by the 
committers, and stop arguing the toss.  Then follow the process like 
everyone else does to try and move *forward*, not backward.

I hope I speak for all the committers here.  If anyone thinks this 
request is out of line, please speak up.

> Why do you bring this up now when I mentioned that I had vetoed
> the change a good three weeks ago, in STATUS, and advised on
> list that it would be reverted?  

Because you putting random crap in STATUS is meaningless.  The R-T-C 
process under which the 2.0.x tree is maintained is not.

joe


Adding OID group support to mod_ssl

2005-08-07 Thread Dirk-Willem van Gulik

Martin, David,

See below a patch which now works with multiple group membership. That is
IMHO as far as Apache should go - anything beyond this should really be
done through OpenSSL and some ASN1 'format' string passed along. Once you
start looping through more complex sets one finds that mod_auth_svn
requests more than one OID over time; or several times for the same one
within local redirects. So at some point caching this data may make sense.

Dw.

Index: ssl_expr_eval.c
===
--- ssl_expr_eval.c (revision 226665)
+++ ssl_expr_eval.c (working copy)
@@ -40,6 +40,54 @@
 static char *ssl_expr_eval_func_file(request_rec *, char *);
 static int   ssl_expr_eval_strcmplex(char *, char *);

+#define AP_ASN1_ISPRINTABLE(x) (\
+   (x) == V_ASN1_IA5STRING ||\
+   (x) == V_ASN1_T61STRING || \
+   (x) == V_ASN1_PRINTABLESTRING || \
+   (x) == V_ASN1_UTF8STRING)
+
+/* Perl code to generate groups or just single strings..
+
+#!/usr/bin/perl
+#
+use Convert::ASN1;
+use strict;
+$|++;
+
+my @groups = @ARGV
+or die "Syntax: $0  ...\n";
+
+my $asn = Convert::ASN1->new;
+
+# The difference between SEQUENCE and SET is in the order of transmission
+# of the fields: for SEQUENCE, a sender is required to transmit them in
+# the order listed in the notation; for SET, the order of transmission is
+# an implementation option for the sender. The mod_ssl module detects
+# both types.
+#
+my $bytes='';
+if ($#groups) {
+   $asn->prepare('str SET OF STRING'); # we're not order sensitive.
+   $bytes = $asn->encode(str => [EMAIL PROTECTED])
+   or die $!;
+} else {
+   $asn->prepare('str STRING');
+   $bytes = $asn->encode(str => $groups[0])
+   or die $!;
+}
+
+my $bytes = $asn->encode(str => [EMAIL PROTECTED])
+or die $!;
+
+print "Line to include in OPENSSL config:\n";
+
+print "DER";
+map { printf ":%02X",$_; } unpack('C*', $bytes);print "\n";
+
+exit 0;
+
+*/
+
 BOOL ssl_expr_eval(request_rec *r, ssl_expr *node)
 {
 switch (node->node_op) {
@@ -199,7 +247,6 @@
 }

 #define NUM_OID_ELTS 8 /* start with 8 oid slots, resize when needed */
-
 apr_array_header_t *ssl_extlist_by_oid(request_rec *r, const char *oidstr)
 {
 int count = 0, j;
@@ -229,7 +276,28 @@
 /* Loop over all extensions, extract the desired oids */
 for (j = 0; j < count; j++) {
 X509_EXTENSION *ext = X509_get_ext(xs, j);
+#if 0
+   {
+   char buff[16*1024];
+BUF_MEM *buf;
+   BIO *bio = BIO_new(BIO_s_mem());
+   OBJ_obj2txt(buff, sizeof(buff), ext->object, 0);

+   if (X509V3_EXT_print(bio, ext, /* X509V3_EXT_ERROR_UNKNOWN */ X509V3_EXT_PARSE_UNKNOWN /*  X509V3_EXT_DUMP_UNKNOWN */, 0) == 1) {
+   BIO_get_mem_ptr(bio, &buf);
+
+   /* XXX for some reason the PARSE_UNK do not have a trailing \0 */
+   buf->data[ buf->length -1 ] = 0;
+
+   ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server, "Extension '%s': %s", buff,buf->data);
+   };
+   BIO_vfree(bio);
+
+   };
+#endif
+/* XXX not the most efficient way of doing this - we probably want to cache
+ * the strings extracted for repeated lookups on new oidstr's.
+ */
 if (OBJ_cmp(ext->object, oid) == 0) {
 BIO *bio = BIO_new(BIO_s_mem());

@@ -238,13 +306,38 @@
 char **new = apr_array_push(val_array);

 BIO_get_mem_ptr(bio, &buf);
-
 *new = apr_pstrdup(r->pool, buf->data);
-}
-
+   ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
+   "X509v3 extension %s == '%s' found.", oidstr, *new);
+} else
+   /* The above X509V3_EXT_print() only captures OID's which are a) hardcoded in openssl its objects.txt
+ * file, b) referenced in the asn1 parsing and c) listed as valid in the 509v3 extension code. Below
+* we simply also accept any fields which have a normalish string in them.
+ */
+   if (AP_ASN1_ISPRINTABLE(ext->value->data[0])) {
+   char **new = apr_array_push(val_array);
+*new = apr_pstrmemdup(r->pool, &(ext->value->data[2]), ext->value->data[1]);
+   ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
+   "Raw X509v3 extension %s == <%s> found in client certificate", oidstr, *new);
+   } else
+   if ((ext->value->data[0] == V_ASN1_SET || ext->value->data[0] == V_ASN1_SEQUENCE) &&
+   (ext->value->data[1]>3) && (AP_ASN1_ISPRINTABLE(ext->value->data[2])))
+   {
+   int len = ext->value->data[1];
+   int i = 2;
+   while(i < len) {
+   if (AP_ASN1_ISPRINTABLE(ext->value->data[i])) {
+   char **new = apr_array_push(val

[PATCH] fix incorrect 304's responses when cache is unwritable

2005-08-07 Thread Colm MacCarthaigh

I finally developed some time to look into this. mod_cache doesn't
behave very nicely when the cache area fills. Of course administrators
should make sure it doesn't fill in the first place, but nevertheless a
few people have hit this bug (me included) and I think mod_cache should
handle the problem gracefully.

Anyway, the problem occurs when the cache is unwritable, and mod_cache
needs to revalidate a cached entity. cache_select_url handles this by
rewriting headers_in to become a conditional request. However the code
in cache_save_filter which turns the request back into its original
(possibly unconditional) format is itself conditional on store_headers()
working. 
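(For anyone not familiar with that code path, the conditional rewrite
amounts to roughly the following; this is only a sketch, the helper name is
invented and the cached response headers are passed in explicitly rather
than pulled from the real cache handle:)

    static void make_conditional(request_rec *r, apr_table_t *cached_hdrs)
    {
        /* Copy the validators from the cached response into the client's
         * request headers so the backend can answer 304 Not Modified. */
        const char *etag = apr_table_get(cached_hdrs, "ETag");
        const char *lastmod = apr_table_get(cached_hdrs, "Last-Modified");

        if (etag != NULL) {
            apr_table_set(r->headers_in, "If-None-Match", etag);
        }
        if (lastmod != NULL) {
            apr_table_set(r->headers_in, "If-Modified-Since", lastmod);
        }
    }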

The patch I've attached should be reasonably self-documenting, any
questions - just ask. 

-- 
Colm MacCárthaigh    Public Key: [EMAIL PROTECTED]
Index: mod_cache.c
===
--- mod_cache.c (revision 230608)
+++ mod_cache.c (working copy)
@@ -666,7 +666,13 @@
 ap_cache_accept_headers(cache->handle, r, 1);
 }
 
-/* Write away header information to cache. */
+/* Write away header information to cache. It is possible that we are
+ * trying to update headers for an entity which has already been cached.
+ * 
+ * This may fail, due to an unwritable cache area. E.g. filesystem full,
+ * permissions problems or a read-only (re)mount. This must be handled 
+ * later. 
+ */
 rv = cache->provider->store_headers(cache->handle, r, info);
 
 /* Did we just update the cached headers on a revalidated response?
@@ -675,7 +681,7 @@
  * the same way as with a regular response, but conditions are now checked
  * against the cached or merged response headers.
  */
-if (rv == APR_SUCCESS && cache->stale_handle) {
+if (cache->stale_handle) {
 apr_bucket_brigade *bb;
 apr_bucket *bkt;
 int status;
@@ -699,12 +705,42 @@
 }
 
 cache->block_response = 1;
+
+/* Before returning we need to handle the possible case of an
+ * unwritable cache. Rather than leaving the entity in the cache
+ * and having it constantly re-validated, now that we have recalled 
+ * the body it is safe to try and remove the url from the cache.
+ */
+if (rv != APR_SUCCESS) {
+ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server,
+ "cache: updating headers with store_headers failed. "
+ "Removing cached url.");
+
+if (cache->provider->remove_url(url) != OK) {
+/* Probably a mod_disk_cache cache area has been (re)mounted 
+ * read-only, or there is a permissions problem. 
+ *
+ * XXX: right now mod_disk_cache's remove_url doesn't do
+ * anything and always returns OK. Once it does, this codepath 
+ * will make more sense. 
+ */
+ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server,
+ "cache: attempt to remove url from cache unsuccessful.");
+}
+}
+
 return ap_pass_brigade(f->next, bb);
 }
+  
+if(rv != APR_SUCCESS) {
+ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server,
+ "cache: store_headers failed");
+ap_remove_output_filter(f);
 
-if (rv == APR_SUCCESS) {
-rv = cache->provider->store_body(cache->handle, r, in);
+return ap_pass_brigade(f->next, in);
 }
+
+rv = cache->provider->store_body(cache->handle, r, in);
 if (rv != APR_SUCCESS) {
 ap_log_error(APLOG_MARK, APLOG_DEBUG, rv, r->server,
  "cache: store_body failed");


Re: [PATCH] Improved doxygen output for http_connection.h

2005-08-07 Thread Neale Ranns





Ian,

The package keyword is for declaring Java packages and results in an entry in the list of namespaces in the Apache project. To get an entry in the modules page we need to use the defgroup/ingroup keywords.
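For example, something along these lines (the group names here are purely illustrative, not a proposal for the actual grouping):

    /**
     * @defgroup APACHE_HTTPD_CONNECTION Connection handling
     * @ingroup  APACHE_HTTPD
     * @{
     */

    AP_CORE_DECLARE(void) ap_process_connection(conn_rec *c, void *csd);

    /** @} */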

The deffunc keyword is now deprecated. Looking through the manual I could not find a replacement. I'm guessing the doxygen preprocessing macro expansions make it unnecessary, since it no longer gets confused with the AP_DECLARE(x) etc.

I hope that answers your questions.

neale

On Sat, 2005-08-06 at 01:40, Ian Holsman wrote:

Hi Neale
what does removing the package line do?

doesn't it remove the function from their grouping on the modules page?
and the removal of the deffunc prototype... why?

Neale Ranns wrote:
> hi,
> 
> I'm new to the project and have been reading the code to see how it all
> works and I've been fixing the doxygen (version 1.4.4) tags as I go.
> 
> attached a patch for http_connection.h based on the version on the trunk
> i checked out from subversion this morning.
> 
> If it's useful and you'd like more let me know. If someone else is
> already working on this, then I'll just read.
> 
> thanks
> 
> neale
> 
> 
> 
> 
> 
> Index: include/http_connection.h
> ===
> --- include/http_connection.h	(revision 227319)
> +++ include/http_connection.h	(working copy)
> @@ -14,6 +14,11 @@
>   * limitations under the License.
>   */
>  
> +/**
> + * @file  http_connection.h
> + * @brief Apache connection library
> + */
> +
>  #ifndef APACHE_HTTP_CONNECTION_H
>  #define APACHE_HTTP_CONNECTION_H
>  
> @@ -25,9 +30,6 @@
>  extern "C" {
>  #endif
>  
> -/**
> - * @package Apache connection library
> - */
>  #ifdef CORE_PRIVATE
>  /**
>   * This is the protocol module driver.  This calls all of the
> @@ -36,10 +38,13 @@
>   * @param csd The mechanism on which this connection is to be read.  
>   *Most times this will be a socket, but it is up to the module
>   *that accepts the request to determine the exact type.
> - * @deffunc void ap_process_connection(conn_rec *c, void *csd)
>   */
>  AP_CORE_DECLARE(void) ap_process_connection(conn_rec *c, void *csd);
>  
> +/**
> + * Flushes all remaining data in the client send buffer
> + * @param c The connection to flush
> + */
>  AP_CORE_DECLARE(void) ap_flush_conn(conn_rec *c);
>  
>  /**
> @@ -70,10 +75,12 @@
>   * if it encounters a fatal error condition.
>   *
>   * @param p The pool from which to allocate the connection record
> + * @param server The server record to create the connection for. 
>   * @param csd The socket that has been accepted
>   * @param conn_id A unique identifier for this connection.  The ID only
>   *needs to be unique at that time, not forever.
>   * @param sbh A handle to scoreboard information for this connection.
> + * @param alloc The bucket allocator to use for all bucket/brigade creations
>   * @return An allocated connection record or NULL.
>   */
>  AP_DECLARE_HOOK(conn_rec *, create_connection,
> @@ -89,7 +96,6 @@
>   *Most times this will be a socket, but it is up to the module
>   *that accepts the request to determine the exact type.
>   * @return OK or DECLINED
> - * @deffunc int ap_run_pre_connection(conn_rec *c, void *csd)
>   */
>  AP_DECLARE_HOOK(int,pre_connection,(conn_rec *c, void *csd))
>  
> @@ -100,12 +106,10 @@
>   * to handle the request is the last module run.
>   * @param c The connection on which the request has been received.
>   * @return OK or DECLINED
> - * @deffunc int ap_run_process_connection(conn_rec *c)
>   */
>  AP_DECLARE_HOOK(int,process_connection,(conn_rec *c))
>  
> -/* End Of Connection (EOC) bucket */
> -
> +/** End Of Connection (EOC) bucket */
>  AP_DECLARE_DATA extern const apr_bucket_type_t ap_bucket_type_eoc;
>  
>  /**
> @@ -119,7 +123,6 @@
>   * Make the bucket passed in an End Of Connection (EOC) bucket
>   * @param b The bucket to make into an EOC bucket
>   * @return The new bucket, or NULL if allocation failed
> - * @deffunc apr_bucket *ap_bucket_eoc_make(apr_bucket *b)
>   */
>  AP_DECLARE(apr_bucket *) ap_bucket_eoc_make(apr_bucket *b);
>  
> @@ -128,7 +131,6 @@
>   * that the connection will be closed.
>   * @param list The freelist from which this bucket should be allocated
>   * @return The new bucket, or NULL if allocation failed
> - * @deffunc apr_bucket *ap_bucket_eoc_create(apr_bucket_alloc_t *list)
>   */
>  AP_DECLARE(apr_bucket *) ap_bucket_eoc_create(apr_bucket_alloc_t *list);
>  






Re: svn commit: r230592 - in /httpd/httpd/branches/2.0.x: CHANGES STATUS modules/proxy/proxy_http.c

2005-08-07 Thread r . pluem


William A. Rowe, Jr. wrote:
> At 05:28 PM 8/6/2005, Joe Orton wrote:
> 
>>On Sat, Aug 06, 2005 at 09:29:13PM -, William Rowe wrote:
>>
>>>Author: wrowe
>>>Date: Sat Aug  6 14:29:05 2005
>>>New Revision: 230592
>>>
>>>URL: http://svn.apache.org/viewcvs?rev=230592&view=rev
>>>Log:
>>>
>>>  As much as it pains me, seriously, it seems that reviewing the re-backport
>>>  of this code was too illegible for review, so it seems we will need to
>>>  re-review a fresh backport from httpd trunk.  
>>
>>That patch went through the normal 2.0.x review process and received 
>>three +1s and no vetoes.  You absolutely cannot come along a few months 
>>later and say "oh, actually, -1" and rip stuff out that you now decide 
>>you don't like.
> 
> 
> It received 3 +1 votes, a slim review.  It was never released, 
> so it's not in fact 'done'.  If unreleased changes are incorrect, 
> they need to be fixed, or needs to be reverted.
> 

Sorry for being confused, but I just want to understand the commit 
policy/process on 2.0.x better.
As far as I understand it right now, it works basically this way:

1. A change to 2.0.x is proposed.
2. It gets 3 binding +1 and no binding -1.
3. The change is commited in subversion.

If some person (with or without a binding vote) thinks that the change deserves
a -1 after 3. has been executed, the process starts from scratch: reverting this
change follows the same process as any change to 2.0.x, and you have to go
through 1. - 3. to get the revert done on the 2.0.x branch.

Please advise me if I mixed something up. I just want to understand these
things.

Regards

Rüdiger

[..cut..]


Re: [SubPatches] httpd-2.0.54-proxy-request.patch

2005-08-07 Thread William A. Rowe, Jr.
At 09:53 PM 8/6/2005, William A. Rowe, Jr. wrote:
>   (don't set up else code blocks following an if case which
>   results in an absolute 'return', especially when we were
>   breaking 80-col limits left and right)

Ignore... sorry, that referred to a bit of code that no longer
is an issue.