Bug report for Apache httpd-1.3 [2008/10/19]
Status:   UNC=Unconfirmed NEW=New ASS=Assigned OPN=Reopened VER=Verified
          (Skipped: Closed/Resolved)
Severity: BLK=Blocker CRI=Critical REG=Regression MAJ=Major
          MIN=Minor NOR=Normal ENH=Enhancement TRV=Trivial

|Bug ID|Sta|Sev|Date Posted|Description (truncated at source)
|10744|New|Nor|2002-07-12|suexec might fail to open log file
|10747|New|Maj|2002-07-12|ftp SIZE command and 'smart' ftp servers results i
|10760|New|Maj|2002-07-12|empty ftp directory listings from cached ftp direc
|14518|Opn|Nor|2002-11-13|QUERY_STRING parts not incorporated by mod_rewrite
|16013|Opn|Nor|2003-01-13|Fooling mod_autoindex + IndexIgnore
|16631|Inf|Min|2003-01-31|.htaccess errors logged outside the virtual host l
|17318|Inf|Cri|2003-02-23|Abend on deleting a temporary cache file if proxy
|19279|Inf|Min|2003-04-24|Invalid chmod options in solaris build
|21637|Inf|Nor|2003-07-16|Timeout causes a status code of 200 to be logged
|21777|Inf|Min|2003-07-21|mod_mime_magic doesn't handle little gif files
|22618|New|Maj|2003-08-21|MultiViews invalidates PATH_TRANSLATED if cgi-wrap
|25057|Inf|Maj|2003-11-27|Empty PUT access control in .htaccess overrides co
|26126|New|Nor|2004-01-14|mod_include hangs with request body
|26152|Ass|Nor|2004-01-15|Apache 1.3.29 and below directory traversal vulner
|26790|New|Maj|2004-02-09|error deleting old cache file
|29257|Opn|Nor|2004-05-27|Problem with apache-1.3.31 and mod_frontpage (dso,
|29498|New|Maj|2004-06-10|non-anonymous ftp broken in mod_proxy
|29538|Ass|Enh|2004-06-12|No facility used in ErrorLog to syslog
|30207|New|Nor|2004-07-20|Piped logs don't close read end of pipe
|30877|New|Nor|2004-08-26|htpasswd clears passwd file on Sun when /var/tmp i
|30909|New|Cri|2004-08-28|sporadic segfault resulting in broken connections
|31975|New|Nor|2004-10-29|httpd-1.3.33: buffer overflow in htpasswd if calle
|32078|New|Enh|2004-11-05|clean up some compiler warnings
|32539|New|Trv|2004-12-06|[PATCH] configure --enable-shared= brocken on SuSE
|32974|Inf|Maj|2005-01-06|Client IP not set
|33086|New|Nor|2005-01-13|unconsistency betwen 404 displayed path and server
|33495|Inf|Cri|2005-02-10|Apache crashes with "WSADuplicateSocket failed for
|33772|New|Nor|2005-02-28|inconsistency in manual and error reporting by sue
|33875|New|Enh|2005-03-07|Apache processes consuming CPU
|34108|New|Nor|2005-03-21|mod_negotiation changes mtime to mtime of Document
|34114|New|Nor|2005-03-21|Apache could interleave log entries when writing t
|34404|Inf|Blk|2005-04-11|RewriteMap prg can not handle fpout
|34571|Inf|Maj|2005-04-22|Apache 1.3.33 stops logging vhost
|34573|Inf|Maj|2005-04-22|.htaccess not working / mod_auth_mysql
|35424|New|Nor|2005-06-20|httpd disconnect in Timeout on CGI
|35439|New|Nor|2005-06-21|Problem with remove "/../" in util.c and mod_rewri
|35547|Inf|Maj|2005-06-29|Problems with libapreq 1.2 and Apache::Cookie
|3|New|Nor|2005-06-30|Can't find DBM on Debian Sarge
|36375|Opn|Nor|2005-08-26|Cannot include http_config.h from C++ file
|37166|New|Nor|2005-10-19|Under certain conditions, mod_cgi delivers an empt
|37185|New|Enh|2005-10-20|AddIcon, AddIconByType for OpenDocument format
|37252|New|Reg|2005-10-26|gen_test_char reject NLS string
|38989|New|Nor|2006-03-15|restart + piped logs stalls httpd for 24 minutes (
|39104|New|Enh|2006-03-25|[FR] fix build with -Wl,--as-needed
|39287|New|Nor|2006-04-12|Incorrect If-Modified-Since validation (due to syn
|39937|New|Nor|2006-06-30|Garbage output if README.html is gzipped or compre
|40176|New|Nor|2006-08-03|magic and mime
|40224|Ver|Nor|2006-08-10|System time crashes Apache @year 2038 (win32 only?
|41279|New|Nor|2007-01-02|Apache 1.3.37 htpasswd is vulnerable to buffer ove
|42355|New|Maj|2007-05-08|Apache 1.3 permits non-rfc HTTP error code >= 600
|43626|New|Maj|2007-10-15|r->path_info returning invalid value
|44768|
Fwd: [EMAIL PROTECTED] DirectoryIndex reset ?
On Sun, Oct 19, 2008 at 6:57 PM, William A. Rowe, Jr. <[EMAIL PROTECTED]> wrote:
> Provided that trunk doesn't recognize 'disabled' unless it's a single
> argval, then
>
>     DirectoryIndex disabled
>
> turns it off
>
>     DirectoryIndex disabled nothere
>
> would serve disabled[.html] and then nothere[.html].
>
> Can we also ensure it's only 'disabled' (vs 'disable' vs 'none') so there
> is only one special-pattern to work around?
>
> (Of course DirectoryIndex disabled.html is just fine to serve that doc.)

I think I have a match for the semantics, but I'm not sure whether
shoehorning this into DirectoryIndex is worth it compared to an ugly
"DirectoryIndexDisable"-like directive.

http://people.apache.org/~covener/mod_dir-disabled.diff

-- 
Eric Covener
[EMAIL PROTECTED]
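For reference, the semantics under discussion would look like this in a config (a sketch of the proposed trunk behaviour as described above, not of any released version):

```apache
# 'disabled' as the sole argument turns directory indexes off entirely:
DirectoryIndex disabled

# With more than one argument the keyword loses its special meaning, so
# this looks for files literally named disabled[.html], then nothere[.html]:
DirectoryIndex disabled nothere

# A file really named disabled.html can still be served explicitly:
DirectoryIndex disabled.html
```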
Re: CRL verification in mod_ssl
2008/10/15 Dr Stephen Henson <[EMAIL PROTECTED]>:
> Erwann ABALEA wrote:
>> 2008/10/15 Dr Stephen Henson <[EMAIL PROTECTED]>:
>>> Dirk-Willem van Gulik wrote:
>>>> On Aug 28, 2008, at 9:41 PM, Nicob wrote:
>> [...]
> This issue does have some security implications. For example a revoked
> client certificate could appear valid by substituting a delta CRL for a
> full CRL.

As I said, you're right. CRL verification is definitely not done properly
by the most widely adopted web server... The threat is also linked to how
much trust is put into the CA producing those CRLs.

[...]

> I'll have a look at that in more detail. There are some security issues
> with separate CRL and certificate signing keys which were debated on the
> PKIX lists some time ago.

Mr Patterson told me about these issues, but the URLs given pointed to a
"solution" proposed by someone from Microsoft to circumvent the fact that
CAPI doesn't handle it either. The solution was to have both the old and
new keys sign the CRL. I'll try to google some information about this
discussion.

> Multiple paths can be used by OpenSSL but each path can contain both
> CRLs and certificates.
>
> Would there be a need to be able to restrict paths to one or the other?

No need to do that. The real need is to make sure that identical
configuration variables lead to the same behaviour.

> CRL refresh has some performance issues, particularly in multi-process
> servers. For example a CRL might be 500K or more and be reloaded on each
> new connection.

A CRL doesn't need to be refreshed on each connection, of course, but it
should at least be refreshed on a regular basis. CRLs need to be carefully
verified, I don't think you'll disagree with that. And "properly" also
means checking up-to-date CRLs.

Talking big numbers... We operate several CAs; the biggest CRL we produce
is an 83MB one, today... This, in itself, is an aberration, but as we say
here, "le client est roi" (the customer is king). So this CRL exists, and
the webservers this customer operates do check it. In fact, the same
customer divided its population into 4 pieces, hence 4 CAs, and the 4 CRLs
all together "weigh" 260MB. Refreshing those CRLs on each connection is of
course out of the question, everybody agrees with that. But they still need
to be refreshed on a periodic basis: those CRLs are produced at least once
a day, some of them every hour. That's an extreme case, one OpenSSL can't
handle properly (the memory needed to parse the CRL is really huge). But
the arguments remain valid for smaller CRLs.

> OpenSSL 0.9.9 does have some reload support though. If CRL processing
> was delegated to OpenSSL it would be available automatically.

What is the decision criterion to reload a CRL? Expiration of the
"notAfter" date? An application-defined period would be better.

[PKITS tests]
> It should be OK. The script pkits-run.pl sets the necessary flags for
> each case. It won't verify all such cases by default.

Thanks, I played with it; it seems there are new execution paths for me to
discover :)

All in all, the CRL validation performed by mod_ssl exists, which is good,
since it was written when OpenSSL didn't provide a "clean" solution. But it
has several flaws, and now that OpenSSL is able to do the job, maybe it's
time to rewrite the glue between them?

-- 
Erwann.
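For context, the mod_ssl CRL hooks being discussed are the two 2.2-era directives below - a minimal sketch (paths are hypothetical). Note that these files are read at server start/restart, which is exactly why the periodic-refresh behaviour debated above does not exist today:

```apache
<VirtualHost _default_:443>
    SSLEngine on
    SSLCACertificateFile /etc/apache2/ssl/ca-bundle.pem

    # Either a single concatenated CRL file...
    SSLCARevocationFile /etc/apache2/ssl/crl/all-crls.pem
    # ...or a directory of hash-symlinked CRLs (one of the "identical
    # configuration variables" whose behaviour the thread wants unified):
    SSLCARevocationPath /etc/apache2/ssl/crl/

    SSLVerifyClient require
</VirtualHost>
```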
Re: leak on graceful restarts
On 10/19/2008 07:25 PM, Jim Jagielski wrote:
>
> On Oct 18, 2008, at 3:04 PM, Rainer Jung wrote:
>
>> Ruediger Pluem schrieb:
>>>
>>> On 10/18/2008 01:25 AM, Paul Querna wrote:
>>>> Looking at a problem that seems easy to reproduce using un-patched
>>>> trunk, 2.2.10 and 2.0.63. Using a graceful restart causes higher
>>>> memory usage in the parent, which is then passed on to the 'new'
>>>> children processes. And, at least over here, httpd consistently
>>>> grows in RSS, without any obvious cause. Seems reproducible on
>>>> Ubuntu and Darwin, using 2.2.10, 2.0.63 and trunk. Any ideas?
>>>
>>> Two quick thoughts:
>>>
>>> 1. Memory fragmentation in the allocator lists (we had this
>>>    discussion either here or on [EMAIL PROTECTED] a short time ago).
>>>
>>> 2. At some locations we use a global pool (process->pool) to allocate
>>>    memory, e.g. mod_ssl and when setting up the listeners. I haven't
>>>    checked so far if this global pool usage is justified.
>>
>> Using my production configurations on Solaris with 2.2.10 worker I can
>> only reproduce a leak during graceful restart when loading mod_ssl.
>> The memory size does not always increase though; after a couple of
>> restarts it decreases again, but not back to the previous minimum, so
>> over all there is a small leak related to restarts.
>
> This is weird... I can recreate this under OS X but not under Sol10,
> and only with mod_ssl in the mix as well. But at least it appears that
> mod_ssl is the main culprit.

In 2.2.x I guess we leak in ssl_scache_shmcb_init when we create a shared
memory segment, passing apr_shm_create a global pool. Maybe the actual
amount of the leak depends on the platform-specific details of the shm
implementation. As said, this is just a guess. AFAICT this leak does not
happen on trunk.

Regards

Rüdiger
Re: strange usage pattern for child processes
On 10/19/2008 07:35 PM, Jim Jagielski wrote:
>
> On Oct 18, 2008, at 4:22 PM, Graham Leggett wrote:
>
>> Ruediger Pluem wrote:
>>
>>>> As a result, the connection pool has made the server slower, not
>>>> faster, and very much needs to be fixed.
>>>
>>> I agree in theory. But I don't think so in practice.
>>
>> Unfortunately I know so in practice. In this example we are seeing
>> single connections being held open for 30 seconds or more. :(
>>
>>> 1. 2.0.x behaviour: If you did use keepalive connections to the
>>>    backend, the connection to the backend was kept alive, and as it
>>>    was bound to the frontend connection in 2.0.x it couldn't be used
>>>    by other connections. Depending on the backend server it wasted
>>>    the same number of resources as without the optimization (backend
>>>    like httpd worker, httpd prefork) or a small amount of resources
>>>    (backend like httpd event with HTTP or a recent Tomcat web
>>>    connector). So you didn't benefit very much from this optimization
>>>    in 2.0.x as long as you did not turn off the keepalives to the
>>>    backend.
>>
>> Those who did need the optimisation would have turned off keepalives
>> to the backend.
>
> Trying to grok things better, but doesn't this imply that for those who
> needed the optimization, disabling the connection pool would be the
> required work-around for 2.2?

No. Without a connection pool (e.g. the default reverse worker) the
backend connection does not get freed any faster than with a connection
pool. Strictly speaking you cannot turn off the connection pools at all
(reverse is also one); you can only turn off reuse of the connections.

Regards

Rüdiger
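In 2.2 configuration terms, turning off reuse (rather than the pool itself) is done with the ProxyPass key=value parameters - a sketch, with a hypothetical backend URL:

```apache
# Never reuse a backend connection once its request completes:
ProxyPass /app/ http://backend.example.com/app/ disablereuse=On

# Or keep reuse, but close pooled connections that have been idle
# for more than 60 seconds:
ProxyPass /api/ http://backend.example.com/api/ ttl=60
```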
Re: strange usage pattern for child processes
On Oct 18, 2008, at 4:22 PM, Graham Leggett wrote:
> Ruediger Pluem wrote:
>
>>> As a result, the connection pool has made the server slower, not
>>> faster, and very much needs to be fixed.
>>
>> I agree in theory. But I don't think so in practice.
>
> Unfortunately I know so in practice. In this example we are seeing
> single connections being held open for 30 seconds or more. :(
>
>> 1. 2.0.x behaviour: If you did use keepalive connections to the
>>    backend, the connection to the backend was kept alive, and as it
>>    was bound to the frontend connection in 2.0.x it couldn't be used
>>    by other connections. Depending on the backend server it wasted the
>>    same number of resources as without the optimization (backend like
>>    httpd worker, httpd prefork) or a small amount of resources
>>    (backend like httpd event with HTTP or a recent Tomcat web
>>    connector). So you didn't benefit very much from this optimization
>>    in 2.0.x as long as you did not turn off the keepalives to the
>>    backend.
>
> Those who did need the optimisation would have turned off keepalives
> to the backend.

Trying to grok things better, but doesn't this imply that for those who
needed the optimization, disabling the connection pool would be the
required work-around for 2.2?
Re: leak on graceful restarts
On Oct 18, 2008, at 3:04 PM, Rainer Jung wrote:
> Ruediger Pluem schrieb:
>>
>> On 10/18/2008 01:25 AM, Paul Querna wrote:
>>> Looking at a problem that seems easy to reproduce using un-patched
>>> trunk, 2.2.10 and 2.0.63. Using a graceful restart causes higher
>>> memory usage in the parent, which is then passed on to the 'new'
>>> children processes. And, at least over here, httpd consistently grows
>>> in RSS, without any obvious cause. Seems reproducible on Ubuntu and
>>> Darwin, using 2.2.10, 2.0.63 and trunk. Any ideas?
>>
>> Two quick thoughts:
>>
>> 1. Memory fragmentation in the allocator lists (we had this discussion
>>    either here or on [EMAIL PROTECTED] a short time ago).
>>
>> 2. At some locations we use a global pool (process->pool) to allocate
>>    memory, e.g. mod_ssl and when setting up the listeners. I haven't
>>    checked so far if this global pool usage is justified.
>
> Using my production configurations on Solaris with 2.2.10 worker I can
> only reproduce a leak during graceful restart when loading mod_ssl. The
> memory size does not always increase though; after a couple of restarts
> it decreases again, but not back to the previous minimum, so over all
> there is a small leak related to restarts.

This is weird... I can recreate this under OS X but not under Sol10, and
only with mod_ssl in the mix as well. But at least it appears that mod_ssl
is the main culprit.
Re: mod_dbd and prepared statements (httpd-2.2.9)
Hi Tom,

Thanks a lot for your help! Everything works as expected...

Cheers,
Andrej
Memory leak in mod_proxy_http and in mpm_event components.
Hello All,

After spending quite some time on Apache and its components (specifically
mod_proxy_http, mod_mem_cache etc.) we have noticed some memory leaks,
which we are publishing with the attached patch.

The leak can be observed when requests are sent on a keep-alive connection
without closing the connection. As long as the connection is open, Apache
allocates some memory from the connection pool for processing each
request. So the more requests you send per connection, the more memory you
see Apache consuming. If we release the connection, we see that Apache
frees the memory and the leak has no impact.

I have also attached plots (thanks to Paul for the good article
http://journal.paul.querna.org/articles/2005/02/23/apr-memory-pools-rock/)
of the memory utilization of APR pools, which clearly show the issue. The
plots are drawn using APR pool logs collected with and without the
attached patch for 25K requests on one connection.

Please help us to know if there is any issue with this patch. We have
found it working at high loads for long time periods, confirming that the
patch doesn't break the code, and without closing the connection we no
longer see any big leaks even with many requests. But we are not sure if
this patch breaks any of the features Apache has implemented which we are
not aware of.

Thanks to everyone on Apache support.

-regards
Harish

//depot/httproxy/httpd-2.2.9/modules/proxy/mod_proxy_http.c#4 - /home/harish/harish-desktop/depot/httproxy/httpd-2.2.9/modules/proxy/mod_proxy_http.c
--- /tmp/tmp.30411.87	2008-10-14 19:59:29.0 +0530
+++ /home/harish/harish-desktop/depot/httproxy/httpd-2.2.9/modules/proxy/mod_proxy_http.c	2008-10-14 19:35:39.0 +0530
@@ -1895,9 +1895,12 @@ static int proxy_http_handler(request_re
      * connection ID of the current upstream connection is the same as that
      * of the connection when the socket was opened.
      */
-    apr_pool_t *p = r->connection->pool;
+    // Fix for bugId:9
+    // The allocation was moved from the connection pool to the request pool
+    // to avoid request processing allocating memory from the connection pool.
+    apr_pool_t *p = r->pool;
     conn_rec *c = r->connection;
-    apr_uri_t *uri = apr_palloc(r->connection->pool, sizeof(*uri));
+    apr_uri_t *uri = apr_palloc(r->pool, sizeof(*uri));
 
     /* find the scheme */
     u = strchr(url, ':');
//depot/httproxy/httpd-2.2.9/server/mpm/experimental/event/event.c#1 - /home/harish/harish-desktop/depot/httproxy/httpd-2.2.9/server/mpm/experimental/event/event.c
--- /tmp/tmp.30411.142	2008-10-14 19:59:29.0 +0530
+++ /home/harish/harish-desktop/depot/httproxy/httpd-2.2.9/server/mpm/experimental/event/event.c	2008-10-14 19:50:53.0 +0530
@@ -566,9 +566,18 @@ static int process_socket(apr_pool_t * p
     int csd;
     int rc;
     apr_time_t time_now = 0;
-    ap_sb_handle_t *sbh;
+    ap_sb_handle_t *sbh = NULL;
+    // Fix for bugId:9
+    // Memory from the connection pool was getting used every time a new
+    // request was served, and if the connection is keep-alive this memory
+    // never gets freed, so we see a leak.
+    // The fix allocates the memory holding child_num and thread_num in
+    // ap_create_sb_handle once in the lifetime of the connection.
+#if 0
     ap_create_sb_handle(&sbh, p, my_child_num, my_thread_num);
+#endif /* 0 */
+
     apr_os_sock_get(&csd, sock);
     time_now = apr_time_now();
@@ -577,6 +586,9 @@ static int process_socket(apr_pool_t * p
     cs = apr_pcalloc(p, sizeof(conn_state_t));
+    // Fix for bugId:9
+    ap_create_sb_handle(&sbh, p, my_child_num, my_thread_num);
+
     pt = apr_pcalloc(p, sizeof(*pt));
     cs->bucket_alloc = apr_bucket_alloc_create(p);
@@ -620,7 +632,12 @@ static int process_socket(apr_pool_t * p
     } else {
         c = cs->c;
+        // Fix for bugId:9
+#if 0
         c->sbh = sbh;
+#endif /* 0 */
+        sbh = c->sbh;
+        ap_create_sb_handle(&sbh, p, my_child_num, my_thread_num);
     }
     if (c->clogging_input_filters && !c->aborted) {
//depot/httproxy/httpd-2.2.9/server/scoreboard.c#1 - /home/harish/harish-desktop/depot/httproxy/httpd-2.2.9/server/scoreboard.c
--- /tmp/tmp.30411.190	2008-10-14 19:59:29.0 +0530
+++ /home/harish/harish-desktop/depot/httproxy/httpd-2.2.9/server/scoreboard.c	2008-10-14 19:25:18.0 +0530
@@ -378,7 +378,10 @@ int find_child_by_pid(apr_proc_t *pid)
 AP_DECLARE(void) ap_create_sb_handle(ap_sb_handle_t **new_sbh, apr_pool_t *p,
                                      int child_num, int thread_num)
 {
-    *new_sbh = (ap_sb_handle_t *)apr_palloc(p, sizeof(ap_sb_handle_t));
+    // fix for bugId: 9
+    if (*new_sbh == NULL)
+        *new_sbh = (ap_sb_handle_t *)apr_palloc(p, sizeof(ap_sb_handle_t));
+    // end of fix
     (*new_sbh)->child_num = child_num;
     (*new_sbh)->thread_num = thread_num;
 }
Re: mod_dbd and prepared statements (httpd-2.2.9)
Andrej van der Zee wrote:
>> * apr_dbd_pvquery is only for string values. You must use
>>   apr_dbd_pvbquery (with a "b") for binary values. See:
>>   http://apr.apache.org/docs/apr-util/1.3/group___a_p_r___util___d_b_d.html
>
> I tried both versions, but without success. But that was because I did
> not pass pointers to float, obviously.
>
>> * You don't pass a float value directly - %f takes a *pointer* to a
>>   float.
>
> O yes, thank you! That did the trick. I definitely missed this in the
> documentation. Is it there?

http://apr.apache.org/docs/apr-util/1.3/group___a_p_r___util___d_b_d.html#g19608fa5d518a5121bee23daacc5c230
describes the binary datatypes. Note that they all take pointers, e.g.

    APR_DBD_TYPE_FLOAT  %f : in, out: float*

>> * It is best not to call ap_dbd_prepare and ap_dbd_acquire directly.
>>   You should populate your own function pointers at config time using
>>   APR_RETRIEVE_OPTIONAL_FN. If APR_RETRIEVE_OPTIONAL_FN gives you NULL
>>   pointers, that means that mod_dbd is not loaded.
>
> I have no problems with calling these functions directly. If mod_dbd is
> not loaded, Apache simply does not start. But it does impose a
> restriction on the order in which the modules are loaded.

True, these functions still work when your module links directly to
mod_dbd - but APR_RETRIEVE_OPTIONAL_FN is worth using for several reasons:

1) You won't need to load modules in a specific order (as you observed)
2) It lets you return meaningful error messages while processing config
   directives, e.g.

       if (dbd_acquire_fn == NULL)
           return "this directive requires mod_dbd";

-tom-
Re: strange usage pattern for child processes
On 10/19/2008 01:21 PM, Ruediger Pluem wrote:
>
> On 10/18/2008 10:22 PM, Graham Leggett wrote:
>> Ruediger Pluem wrote:
>>
>>> Plus the default socket and TCP buffers on most OS should already be
>>> larger than this. So in order to profit from the optimization, the
>>> time the client needs to consume the ProxyIOBufferSize needs to be
>>> remarkable.
>>
>> It makes no difference how large the TCP buffers are, the backend will
>> only be released for reuse when the frontend has completely flushed and
>
> Sorry, I may be wrong here, but I don't think this is the case. If
> there is space left in the TCP buffer you write to the socket
> non-blocking, and the data seems to be processed for the sending
> process then. It does not block until the data is sent by the OS. And
> even flush buckets do not seem to cause any special processing like a
> blocking flush. So returning to your CNN example I am pretty sure that
> if the TCP buffer for the socket of the client connection holds 92k
> plus the header overhead, your connection to the backend will be
> released almost instantly.
> I don't even think that a close or shutdown on the socket will block
> until all data in the TCP buffer is sent. But this wouldn't matter on a
> keepalive connection to the client anyway, since the connection isn't
> closed.

I did some further investigations in the meantime. Using the following
configuration

    ProxyPass /cnn/ http://www.cnn.com/
    SendBufferSize 285168
    ProxyIOBufferSize 131072

and the following "slow" test client

    #!/usr/bin/perl -w

    use strict;
    use Socket;

    my $proto;
    my $port;
    my $sin;
    my $addr;
    my $url;
    my $oldfh;

    $proto = getprotobyname('tcp');
    $port = getservbyname('http','tcp');
    $addr = "192.168.2.4";
    $url = "/cnn/";
    socket(SOCKET_H,PF_INET,SOCK_STREAM,$proto);
    $sin = sockaddr_in($port,inet_aton($addr));
    setsockopt(SOCKET_H, SOL_SOCKET, SO_RCVBUF, 1);
    connect(SOCKET_H,$sin);
    $oldfh = select SOCKET_H;
    $| = 1;
    select $oldfh;
    print SOCKET_H "GET $url HTTP/1.0\r\n\r\n";
    sleep(500);
    close SOCKET_H;

I was able to have the connection to www.cnn.com returned back to the pool
immediately. The strange thing that remains is that I needed to set
SendBufferSize about 3 times higher than the actual size of the page to
get this done. I currently do not know why this is the case.

Another maybe funny sidenote: Because of the way the read method on socket
buckets works and the way the core input filter works, the ap_get_brigade
call when processing the HTTP body of the backend response in
mod_proxy_http never returns a brigade that contains more than 8k of data,
no matter what you set for ProxyIOBufferSize. And this has been the case
since 2.0.x days. So the optimization was always limited to sending at
most 8k, and in this case the TCP buffer (the send buffer) should have
fixed this in many cases.

Regards

Rüdiger
Re: strange usage pattern for child processes
On 10/18/2008 10:22 PM, Graham Leggett wrote:
> Ruediger Pluem wrote:
>
>> Plus the default socket and TCP buffers on most OS should already be
>> larger than this. So in order to profit from the optimization, the
>> time the client needs to consume the ProxyIOBufferSize needs to be
>> remarkable.
>
> It makes no difference how large the TCP buffers are, the backend will
> only be released for reuse when the frontend has completely flushed and

Sorry, I may be wrong here, but I don't think this is the case. If there
is space left in the TCP buffer you write to the socket non-blocking, and
the data seems to be processed for the sending process then. It does not
block until the data is sent by the OS. And even flush buckets do not seem
to cause any special processing like a blocking flush. So returning to
your CNN example, I am pretty sure that if the TCP buffer for the socket
of the client connection holds 92k plus the header overhead, your
connection to the backend will be released almost instantly.
I don't even think that a close or shutdown on the socket will block until
all data in the TCP buffer is sent. But this wouldn't matter on a
keepalive connection to the client anyway, since the connection isn't
closed.

Regards

Rüdiger