Re: Using gzip and CustomLog
Thanks Rainer.

On Sun, Feb 8, 2009 at 8:50 PM, Rainer Jung wrote:
> On 28.01.2009 06:50, Paras Fadte wrote:
>> I have somewhat modified the rotatelogs utility to support compression.
>> Although it creates files in compressed format (.gz) and rotates them
>> properly, the issue I am facing is that when Apache is restarted
>> (gracefully or via stop/start), the last created compressed file doesn't
>> seem to get closed. Is there a way to rectify this? For compression I am
>> using zlib.
>
> When httpd is restarted or stopped, most rotatelogs processes get stopped
> via a signal. The signal can depend on the platform, but in my case it's
> SIGTERM. You can "truss" your rotatelogs to verify whether that's true for
> your Linux system too; truss will show you the signal the rotatelogs
> process received before terminating.
>
> Then you need to register a signal handler, e.g.
>
> apr_signal(SIGTERM, my_signal_handler);
>
> which gets called automatically whenever the process receives the
> respective signal. Your signal handler my_signal_handler() could then set
> an internal flag indicating that you want to clean up and exit rotatelogs.
>
> You can check this flag before and after the blocking read from the log
> pipe, and if it is set, close your gzip output cleanly and exit rotatelogs.
>
> You can temporarily block or unblock all signal handlers for SIGTERM with
>
> apr_signal_block(SIGTERM);
>
> and
>
> apr_signal_unblock(SIGTERM);
>
> The ErrorLog for the global server behaves a little differently: when
> restarting Apache it doesn't get a signal, but instead gets an EPIPE when
> trying to read from the log pipe.
>
> Regards,
> Rainer
[...]
> Can you please tell me in which file?

I assume you are building rotatelogs from within the httpd sources.

There is a file support/Makefile which contains the line

$(LINK) $(rotatelogs_LTFLAGS) $(rotatelogs_OBJECTS) $(PROGRAM_LDADD)

Simply add "-lz" at the end of that line:

$(LINK) $(rotatelogs_LTFLAGS) $(rotatelogs_OBJECTS) $(PROGRAM_LDADD) -lz

In case you don't know what a Makefile is and how it basically works, you need to read about how to do C software development.

Regards,
Rainer

> On Fri, Jan 23, 2009 at 1:09 PM, Rainer Jung wrote:
>> On 23.01.2009 07:55, Paras Fadte wrote:
>>> Hi,
>>>
>>> I get the following error when I try to use the "compress" function of
>>> zlib in rotatelogs.c. I have included "zlib.h" in rotatelogs.c.
>>>
>>> /home/paras/httpd-2.0.55/support/rotatelogs.c:294: undefined reference
>>> to `compress'
>>> collect2: ld returned 1 exit status
>>>
>>> Is it a linking error? Where should I make the changes to eliminate it?
>>
>> Add -lz to the linking flags.
Re: Using gzip and CustomLog
Hi Rainer,

I have attached the modified "rotatelogs.c" file (originally taken from Apache 2.0.55). Can you please have a look at it and let me know its shortcomings, and the chances that it could segfault?

Thanks in advance.

-Paras

On Tue, Feb 10, 2009 at 1:37 PM, Paras Fadte wrote:
> Thanks Rainer.
>
> On Sun, Feb 8, 2009 at 8:50 PM, Rainer Jung wrote:
>> On 28.01.2009 06:50, Paras Fadte wrote:
[...]

rotatelogs.c
Description: Binary data
AuthLDAPCharsetConfig considered harmful
The AuthLDAPCharsetConfig directive allows server admins to do charset conversion of the username passed in the HTTP auth headers.

RFC 2617 does not specify an encoding for non-ASCII usernames in the {Proxy-}Authorization request headers; mod_authnz_ldap is guessing an encoding based on any Accept-Language header in the request. Given that use of non-ASCII in HTTP authz is not specified by RFC, this is:

a) imposing a defacto standard, and
b) setting a false expectation that use of non-ASCII usernames will actually work with HTTP, and
c) not going to work in practice, as I just had a user complain.

So it seems like a bad idea all round. Am I missing anything?

Regards, Joe
Re: AuthLDAPCharsetConfig considered harmful
On Tue, Feb 10, 2009 at 8:45 AM, Joe Orton wrote:
> The AuthLDAPCharsetConfig directive allows server admins to do charset
> conversion of the username passed in the HTTP auth headers.
>
> RFC 2617 does not specify use of encoding non-ASCII usernames in the
> {Proxy-},Authorization request headers; mod_authnz_ldap is guessing an
> encoding based on any Accept-Language header in the request. Given that
> use of non-ASCII in HTTP authz is not specified by RFC, this is:

Isn't it encoding-agnostic, with the exception of ASCII control characters?

> a) imposing a defacto standard, and

I had assumed it was compiled from browser observation, which would make it a little more reactionary than it's painted here.

> b) setting an false expectation that use of non-ASCII usernames will
> actually work with HTTP, and

I agree that this partial/fuzzy solution is costly in terms of support.

> c) not going to work in practice, as I just had a user complain.

When I looked at it, I thought it minimally needed to know user-agent details, and possibly some heuristic to double-check the utf8-or-local-codepage guess. For example, my notes imply the current scheme would have trouble with recent Opera releases, which favor UTF-8 for the encoding of the basic auth credentials.

> So it seems like a bad idea all round. Am I missing anything?

IMO it makes sense at the very least to call out that it's a heuristic that shouldn't be relied upon. Being influenced by e.g. BrowserMatch, or by the presence of certain sequences provided by the user, would go a long way toward helping a savvy administrator accommodate the unpredictable incoming charset.

--
Eric Covener
cove...@gmail.com
Re: AuthLDAPCharsetConfig considered harmful
On Tue, Feb 10, 2009 at 09:52:43AM -0500, Eric Covener wrote:
> On Tue, Feb 10, 2009 at 8:45 AM, Joe Orton wrote:
> > The AuthLDAPCharsetConfig directive allows server admins to do charset
> > conversion of the username passed in the HTTP auth headers.
[...]
> Isn't it encoding agnostic, with the exception of ascii control characters?

That's probably a better way to put it, yes.

> > a) imposing a defacto standard, and
>
> I had assumed it was compiled from browser observation, which would
> make it a little more reactionary than it's painted here.

OK, even worse choice of language there, including the spelling. But that is the point: it's extrapolating from the behaviour of a couple of browsers, rather than following any RFC...

> For example my notes imply the current scheme would have trouble with
> recent Opera releases, which favor utf-8 for the encoding of the basic
> auth credentials.

... and hence leads to interop failure.

> Being influenced by e.g. BrowserMatch, or the presence of certain
> sequences provided by the user would go a long way in at least helping
> a savvy administrator accomodate the unpredictable incoming charset.

I think it would be better to simply advise against use of non-ASCII usernames in the docs.

Regards, Joe
quickhandler hook: what is "lookup" for?
Hi all,

According to the method signature for the quick_handler hook, an int field called "lookup" is passed. The API docs describe the "lookup" field as: "Controls whether the caller actually wants content or not. lookup is set when the quick_handler is called out of ap_sub_req_lookup_uri()".

This description doesn't tell me exactly what lookup means: does (lookup == 1) mean the caller wants content? Does (lookup == 0) mean the caller wants content?

Looking at ap_sub_req_method_uri(), which is used to set up a subrequest (but not necessarily run it yet, or at all), lookup is set to 1:

    if (next_filter) {
        res = ap_run_quick_handler(rnew, 1);
    }

This would imply to me that (lookup == 1) means the caller *doesn't* want content.

Looking at ap_run_sub_req(), which is used to actually run the request as set up by ap_sub_req_method_uri(), the quick handler is called again, with lookup set to 0:

    retval = ap_run_quick_handler(r, 0);

This would imply to me that (lookup == 0) means the caller *does* want content.

However, in ap_run_sub_req(), we only run the quick handler if the content *isn't* a file or directory on disk:

    if (!(r->filename && r->finfo.filetype)) {
        retval = ap_run_quick_handler(r, 0);
    }

Why does ap_run_sub_req() care whether the request is represented by a file or directory on disk?

To describe the problem I am trying to solve: mod_cache refuses to cache the result of subrequests, and this is happening because the quick_handler is not run on subrequests with a lookup value of zero. Would it be correct to run the quick handler for all requests, like so?
 AP_DECLARE(int) ap_run_sub_req(request_rec *r)
 {
     int retval = DECLINED;
     /* Run the quick handler if the subrequest is not a dirent or file
      * subrequest */
-    if (!(r->filename && r->finfo.filetype)) {
-        retval = ap_run_quick_handler(r, 0);
-    }
+    retval = ap_run_quick_handler(r, 0);
     if (retval != OK) {
         retval = ap_invoke_handler(r);
         if (retval == DONE) {
             retval = OK;
         }
     }
     ap_finalize_sub_req_protocol(r);
     return retval;
 }

Regards,
Graham
Re: CacheIgnoreHeaders not working correctly
Lars Eilebrecht wrote:
[...]
> So it copies r->headers_out to the local headers_out variable, and
> removes all unwanted headers. However, then r->err_headers_out
> gets merged into headers_out, which is then stored in the cache.
>
> Is there a reason why this is done? This could lead to quite a
> number of headers being stored in the cache, such as Set-Cookie.
> Which happens in my case, as the custom module operates on
> r->err_headers_out.
>
> So a potential fix would be to merge r->headers_out and
> r->err_headers_out into the local headers_out variable, then
> filter the unwanted headers, and store the result.
>
> This seems to work, but maybe I'm missing something.

Any comments about this patch? It fixes the issue, but I'm not 100% sure that I'm not missing something regarding the handling of err_headers_out in mod_disk_cache.

--snip--
--- mod_disk_cache.c.orig	2009-02-10 11:08:41.0 +
+++ mod_disk_cache.c	2009-02-10 10:47:48.0 +
@@ -912,7 +912,9 @@
     if (r->headers_out) {
         apr_table_t *headers_out;
 
-        headers_out = ap_cache_cacheable_hdrs_out(r->pool, r->headers_out,
+        headers_out = apr_table_overlay(r->pool, r->headers_out,
+                                        r->err_headers_out);
+        headers_out = ap_cache_cacheable_hdrs_out(r->pool, headers_out,
                                                   r->server);
         if (!apr_table_get(headers_out, "Content-Type")
@@ -921,8 +923,6 @@
             ap_make_content_type(r, r->content_type));
         }
 
-        headers_out = apr_table_overlay(r->pool, headers_out,
-                                        r->err_headers_out);
         rv = store_table(dobj->hfd, headers_out);
         if (rv != APR_SUCCESS) {
             return rv;
--snip--

ciao...
--
Lars Eilebrecht
l...@eilebrecht.net
[OT] Looking for Apache module-development sidework?
Excuse the off-topic post: With my SpringSource hat on, I'm looking for someone who has availability this month to be a primary developer on a module for Apache 2.2... In general terms, it is a "fair use" module that tracks usage and allows differing levels of access (more requests, etc...) for different users and groups (eg: a registered user can request more resources; a public, anonymous user only a few; a blacklisted user gets dropped on the floor). I can be more specific to those who have questions ;) This module will eventually be open-sourced and donated to the ASF. Email me at my jim.jagiel...@springsource.com address if interested... tia
cache POST requests
Hello,

I'm using Apache 2.2.11 on CentOS 5/x86_64. I'm testing out caching data for GET requests using mod_disk_cache, which I have working. I'd also like to cache data for the same requests via the POST method, but this doesn't seem to work. Is this supported? If so, are there any config changes required for this to work? If not, is this feature planned?

Thanx,

-Tony
---
Manager, IT Operations
Format Dynamics, Inc.
303-573-1800x27
abia...@formatdynamics.com
http://www.formatdynamics.com
RE: cache POST requests
You really shouldn't be trying to cache responses to POST requests. Completely from memory, but the HTTP spec says not to cache POST responses. The URI is the base key in any caching implementation (with the addition of a select few Vary headers, etc.), and your POST data really doesn't factor in.

The normal pattern to use in most of these situations is http://en.wikipedia.org/wiki/Post/Redirect/Get. Think of a POST as a submission from the client. Once you have that submission, just tell the client where to get the appropriate resource with a GET request, and leave the heavy lifting/caching until that request comes in.

Thanks,

Rick Houser
Auto-Owners Insurance
Systems Support
(517)703-2580

-----Original Message-----
From: Anthony J. Biacco [mailto:abia...@formatdynamics.com]
Sent: Tuesday, February 10, 2009 1:25 PM
To: us...@httpd.apache.org
Cc: modules-...@httpd.apache.org
Subject: cache POST requests
[...]
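The Post/Redirect/Get flow Rick describes looks roughly like this on the wire (paths, the result identifier, and the choice of 303 over 302 are illustrative):

```
POST /submit HTTP/1.1            client submits the data
Host: example.com
...

HTTP/1.1 303 See Other           server stores the result and names
Location: /result/abc123         a resource that a cache can key on

GET /result/abc123 HTTP/1.1      client fetches the result; this is an
Host: example.com                ordinary GET that mod_cache can serve
```

The redirect turns the uncacheable POST response into a cacheable GET resource with its own URI.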
RE: cache POST requests
I read that for the 1.0 spec, but thought for 1.1 it was possible with the proper expiration headers. Although I do understand the keying problem.

My problem is that my POSTs vary wildly in size, from 5k to over a meg, and average out to about 45k. Given that GET request lines in Apache are limited to 8k by default, I'll get a 414 error, so I'm not sure where I can turn to cache this. I suppose I can up the LimitRequestLine parameter to the max I need, but I'm not sure how kosher that is.

Thanx,

-Tony
---
Manager, IT Operations
Format Dynamics, Inc.
303-573-1800x27
abia...@formatdynamics.com
http://www.formatdynamics.com

-----Original Message-----
From: Houser, Rick [mailto:houser.r...@aoins.com]
Sent: Tuesday, February 10, 2009 11:37 AM
To: modules-...@httpd.apache.org; us...@httpd.apache.org
Subject: RE: cache POST requests
[...]
RE: cache POST requests
You mean POST REQUESTS, not RESPONSES, correct? GET requests shouldn't be very large, but it's not all that uncommon to have GET responses larger than 1GB (local LANs, etc.).

Accept all the incoming data on a POST (which could be 1+MB file attachments, etc.), generate a unique URL, and redirect the user there to fetch the result.

Thanks,

Rick Houser
Auto-Owners Insurance
Systems Support
(517)703-2580

-----Original Message-----
From: Anthony J. Biacco [mailto:abia...@formatdynamics.com]
Sent: Tuesday, February 10, 2009 1:52 PM
To: modules-...@httpd.apache.org
Subject: RE: cache POST requests
[...]
Re: svn commit: r742992 - in /httpd/httpd/trunk: include/ap_slotmem.h modules/mem/mod_plainmem.c modules/mem/mod_sharedmem.c server/slotmem.c
On 02/10/2009 04:16 PM, j...@apache.org wrote:
> Author: jim
> Date: Tue Feb 10 15:16:24 2009
> New Revision: 742992
>
> URL: http://svn.apache.org/viewvc?rev=742992&view=rev
> Log:
> Add getter/setter functions to the slotmem API. Also,
> reset the id vars to unsigned ints universally.
>
> Modified:
>     httpd/httpd/trunk/include/ap_slotmem.h
>     httpd/httpd/trunk/modules/mem/mod_plainmem.c
>     httpd/httpd/trunk/modules/mem/mod_sharedmem.c
>     httpd/httpd/trunk/server/slotmem.c
>
> Modified: httpd/httpd/trunk/include/ap_slotmem.h
> URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/include/ap_slotmem.h?rev=742992&r1=742991&r2=742992&view=diff
> ==============================================================================
> --- httpd/httpd/trunk/include/ap_slotmem.h (original)
> +++ httpd/httpd/trunk/include/ap_slotmem.h Tue Feb 10 15:16:24 2009
> @@ -150,6 +150,34 @@
>          return APR_SUCCESS;
>      }
>
> +static apr_status_t slotmem_get(ap_slotmem_t *slot, unsigned int id, unsigned char *dest, apr_size_t dest_len)
> +{
> +    void *ptr;
> +    apr_status_t ret;
> +
> +    ret = slotmem_mem(slot, id, &ptr);
> +    if (ret != APR_SUCCESS) {
> +        return ret;
> +    }
> +    memcpy(dest, ptr, dest_len); /* bounds check? */
> +    return APR_SUCCESS;
> +}
> +
> +static apr_status_t slotmem_put(ap_slotmem_t *slot, unsigned int id, unsigned char *src, apr_size_t src_len)
> +{
> +    void *ptr;
> +    apr_status_t ret;
> +
> +    ret = slotmem_mem(slot, id, &ptr);
> +    if (ret != APR_SUCCESS) {
> +        return ret;
> +    }
> +    memcpy(ptr, src, src_len); /* bounds check? */
> +    return APR_SUCCESS;
> +}
> +
>  static const ap_slotmem_storage_method storage = {
>      "plainmem",
>      &slotmem_do,

Why are put and get not added to storage?

Regards

RĂ¼diger
Logging bytes sent
It appears that %b logging of bytes sent can be wrong if something happens to the connection during request processing.

The number logged by mod_log_config is r->bytes_sent, which is computed in ap_content_length_filter(). If something goes wrong (maybe I pull Apache's network cable) while sending the response, the connection gets aborted, but r->bytes_sent isn't changed, so the access log shows that the full length of the response was sent. You can add %X to the logging to see whether the connection was aborted, but the number of bytes logged is still wrong.

I'm wondering if there's some good reason for this that I'm missing? Or maybe it's just an oversight I've noticed because I happen to be looking at that part of the code right now.

Thanks, Dan
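For reference, pairing %b with %X as Dan suggests can be done with a LogFormat like the following (the format name `combined_conn` is made up for this sketch; %b and %X are standard mod_log_config directives):

```apache
# 'combined' plus connection status on completion:
#   X = connection aborted, + = kept alive, - = closed
LogFormat "%h %l %u %t \"%r\" %>s %b %X" combined_conn
CustomLog logs/access_log combined_conn
```

This at least flags aborted connections in the log, even though the %b value itself may still overstate what was actually delivered.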
Re: CacheIgnoreHeaders not working correctly
On 02/09/2009 09:21 PM, Lars Eilebrecht wrote:
> Hi,
>
> I have a question about the header handling logic of
> mod_cache/mod_disk_cache.
>
> With an installation running mod_disk_cache and a custom module
> which fiddles with Cookie and Set-Cookie headers, I am running into
> the problem that mod_disk_cache was storing Set-Cookie headers
> in the cache. It is ignoring "CacheIgnoreHeaders Set-Cookie".
>
> In mod_disk_cache's store_header() function we have this code:
>
>     apr_table_t *headers_out;
>
>     headers_out = ap_cache_cacheable_hdrs_out(r->pool, r->headers_out,
>                                               r->server);
>     [...]
>     headers_out = apr_table_overlay(r->pool, headers_out,
>                                     r->err_headers_out);
>     rv = store_table(dobj->hfd, headers_out);
>
> So it copies r->headers_out to the local headers_out variable, and
> removes all unwanted headers. However, then r->err_headers_out
> gets merged into headers_out which is then stored in the cache.
>
> Is there a reason why this is done? This could lead to quite a
> number of headers being stored in the cache such as Set-Cookie.
> Which happens in my case as the custom module operates on
> r->err_headers_out.
>
> So a potential fix would be to merge r->headers_out and
> r->err_headers_out into the local headers_out variable, then
> filter the unwanted headers, and store the result.
>
> This seems to work, but maybe I'm missing something.

Have a look at

http://svn.apache.org/viewvc?view=rev&revision=649162
http://svn.apache.org/viewvc?view=rev&revision=649791

Regards

RĂ¼diger
RE: cache POST requests
I did mean requests, yes. We run a content-reformatting service using Tomcat, so in reality the responses are large also, because nearly the same content (formatted differently) is sent back.

Another problem I found with large GETs is that IE will truncate them if they are 2k or larger. Whether that happens before the rendering engine or in the engine itself, I don't know; if it's before, then maybe the limit wouldn't be affected by a redirect. But that gets a little off-topic.

Thanx,

-Tony
---
Manager, IT Operations
Format Dynamics, Inc.
303-573-1800x27
abia...@formatdynamics.com
http://www.formatdynamics.com

-----Original Message-----
From: Houser, Rick [mailto:houser.r...@aoins.com]
Sent: Tuesday, February 10, 2009 12:20 PM
To: modules-...@httpd.apache.org
Subject: RE: cache POST requests
[...]
Re: svn commit: r742992 - in /httpd/httpd/trunk: include/ap_slotmem.h modules/mem/mod_plainmem.c modules/mem/mod_sharedmem.c server/slotmem.c
On Feb 10, 2009, at 2:50 PM, Ruediger Pluem wrote: Why are put and get not added to storage? Stupidity :)
Re: CacheIgnoreHeaders not working correctly
Ruediger Pluem wrote:
> Have a look at
>
> http://svn.apache.org/viewvc?view=rev&revision=649162
> http://svn.apache.org/viewvc?view=rev&revision=649791

This needed an MMN bump though, so it won't work for v2.2. :(

Regards,
Graham
mod_cache and module-based authentication
Sorry to bother you all, and first: thanks for building such a great product!

My question relates to the patch suggested by Paul Querna (in 2005) where mod_cache is allowed to be configured to run as a "normal" handler instead of always as a "quick handler". The initial patch and related discussion is here:

http://thread.gmane.org/gmane.comp.apache.devel/20314

with a follow-up related to bug 36937 here:

http://thread.gmane.org/gmane.comp.apache.devel/22676/focus=22679

Originally, the patch was voted down, and I see the rationale for not including it without a clear use case. But in our setting, it would be of high value to get it into the official release. I have outlined our system below, in the hope that someone can decide whether we are within the planned scope of mod_cache or not.

** Motivation

Our server setup is as follows:

* PROXY: Apache 2.2.11 with mod_cache and mod_proxy.
* APPSERVER: A JavaEE application server running a Java-based CMS.

The main part of the motivation for PROXY is to reduce the need to regenerate content by the heavyweight CMS processes. Security is also handled by PROXY, where we use Sun Access Manager to enforce policy-based authentication. This is provided by running a "policy agent" which is loaded as an Apache module. Policies are specified at quite a broad level, e.g. "all users that are marked 'internal' in our LDAP can see everything, others can see nothing".

When PROXY receives a request for a non-cached URL, the agent first authenticates the user (either by using built-in SSO modules or through password authentication), and then decides whether the user is allowed to access this URL. If so, the request is forwarded from PROXY to APPSERVER and can be served without further validation.

The challenge is that since we use mod_cache on PROXY, requests that can be served from cache are returned directly to the user, without ever being seen by the policy agent. This is of course as expected, since mod_cache uses a special "quick handler" in Apache's request chain, allowing requests for cached objects to be served with minimal processing overhead. But as noted above, it is necessary for us to protect the cache against unauthorized users.

Our current workaround is to run two reverse-proxy instances: one which provides authentication (on port 80) and another providing cache (on port 7920, which is only accessible from within PROXY). A request first hits the authentication proxy on port 80 and, if valid, is forwarded to the caching proxy on local port 7920. This works, but it feels somewhat suboptimal, and we would much prefer to be able to use one instance to serve both purposes.

Thank you in advance for any assistance!

--
Kind regards,
Jon Grov
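For readers unfamiliar with the two-instance workaround described above, it can be sketched roughly as follows. This is an untested illustration, not Jon's actual configuration: the agent module name is a placeholder, and the cache and proxy directives are reduced to the bare minimum.

```apache
# Instance 1 (port 80): authentication/authorization only.
Listen 80
# Hypothetical policy-agent module; the real Access Manager agent
# module name will differ.
LoadModule am_agent_module modules/mod_am_agent.so
ProxyPass        / http://127.0.0.1:7920/
ProxyPassReverse / http://127.0.0.1:7920/

# Instance 2 (port 7920, reachable only from localhost): caching proxy.
Listen 127.0.0.1:7920
CacheEnable disk /
CacheRoot /var/cache/httpd
ProxyPass        / http://appserver:8080/
ProxyPassReverse / http://appserver:8080/
```

Every request must pass the agent in instance 1 before it can reach the cache in instance 2, which is exactly the ordering the mod_cache quick handler bypasses in a single instance.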
Re: WELCOME to modules-...@httpd.apache.org
dave wrote:
> Hi All, I'm having trouble with the server_rec->module_config variable,
> and perhaps I am misunderstanding something.

With your last post (the creation), it looks like you understand it well. Are you only creating one process? For example, run it with the -X parameter (to prevent fork()ing) so you can ensure that you aren't working across processes, printing the PID along with the %d. Is the %d in the handler always the same?

Joe
Re: mod_cache and module-based authentication
Jon Grov wrote:
> Our current workaround is to run two reverse proxy-instances, one which
> provides authentication (on port 80) and another providing cache (on
> port 7920, which is only accessible from within PROXY). A request then
> first hits the authentication proxy on port 80, and if valid, is
> forwarded to the caching proxy on local port 7920. This works, but it
> feels somewhat suboptimal, and we would much prefer to be able to use
> one instance to serve both purposes.

I have been tasked with solving a very similar problem: the ability to optionally place the cache anywhere in the output filter chain (instead of replacing the whole filter chain, as now). The rationale is that we need to cache content before the INCLUDES filter gets hold of the content, and that is currently not possible.

Give me a day or two.

Regards,
Graham
Where is srclib/apr-util/xml/Makefile.in ?
Hi,

When I untar 2.2.10 or 2.2.11, I don't see srclib/apr-util/xml/Makefile.in. Has it been removed intentionally, or is it a bug? I'm trying to build 2.2.10 with the builtin xml/expat. If there is any workaround for this, please let me know.

Thanks in advance!

- Ravindra