RE: Changing Connection header use in forward proxy
-----Original Message-----
From: Plüm, Rüdiger, VF-Group [mailto:ruediger.pl...@vodafone.com]
Sent: Thursday, August 05, 2010 8:03 PM
To: dev@httpd.apache.org
Subject: RE: Changing Connection header use in forward proxy

> -----Original Message-----
> From: Rainer Jung
> Sent: Thursday, 5 August 2010 11:51
> To: dev@httpd.apache.org
> Subject: Changing Connection header use in forward proxy
>
> It seems the forward proxy by default sets "Connection: Keep-Alive",
> although it later closes the connection when it detects
> is_address_reusable == 1 in the worker. Doesn't it make sense to issue
> "Connection: close" instead?
>
> Since this happens in code where we have a connection and a request, but
> no worker struct available, is it safe to apply this whenever proxyreq
> is PROXYREQ_PROXY? Or can one somehow configure a non-default forward
> worker with an explicit URL to e.g. a frequently used origin server,
> which would then be expected to reuse connections? I don't expect that
> to be possible, but I'm not sure.

I haven't tried it so far, but IMHO it should be possible to set up pooled
connections for frequently used targets of a forward proxy by adding:

<Proxy http://www.frequentlyused.com/>
    # Set an arbitrary parameter to trigger the creation of a worker
    ProxySet keepalive=on
</Proxy>

If I am correct with this assumption, then we should not proceed the way
you proposed. How about the following untested patch instead?

Index: modules/proxy/proxy_util.c
===================================================================
--- modules/proxy/proxy_util.c (revision 982130)
+++ modules/proxy/proxy_util.c (working copy)
@@ -1568,6 +1568,7 @@
             *balancer = NULL;
             *worker = conf->forward;
             access_status = OK;
+            apr_table_set(r->subprocess_env, "proxy-nokeepalive", "1");
         }
     }
     else if (r->proxyreq == PROXYREQ_REVERSE) {
@@ -1578,6 +1579,7 @@
             *balancer = NULL;
             *worker = conf->reverse;
             access_status = OK;
+            apr_table_set(r->subprocess_env, "proxy-nokeepalive", "1");
         }
     }
 }

Regards

Rüdiger

Hi Rüdiger,

I've applied this patch on 2.2.16 and tested.
As a result, the appropriate Connection header is now sent to the origin server, as expected. For ordinary forward proxy requests, a "Connection: close" header is set. For forward proxy requests to an origin server individually configured with ProxySet keepalive=on, a "Connection: Keep-Alive" header is set. It is a bit of a pity that origin-server keep-alive is not available to general forward proxy requests, but in any case I'm glad that the Connection header is now consistent with the actual behavior. Thanks!

Regards,
Shibuya
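For reference, Rüdiger's suggestion above can be written out as a complete forward-proxy configuration. This is a sketch only, assuming httpd 2.2 with mod_proxy and mod_proxy_http loaded; the hostname is the placeholder from the thread:

    # Enable the forward proxy
    ProxyRequests On

    # Any explicit parameter forces the creation of a dedicated worker
    # for this origin, which can then pool and reuse its connections
    # instead of going through the default (non-reusing) forward worker.
    <Proxy http://www.frequentlyused.com/>
        ProxySet keepalive=on
    </Proxy>

With this in place, requests proxied to www.frequentlyused.com go through the explicit worker, while all other forward-proxy traffic continues to use the default worker and (after the patch) gets "Connection: close".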
RE: Talking about proxy workers
that in the reverse proxy. Regards, Ryujiro Shibuya
RE: OS Keep-alive on forward proxy
Hi Rainer,

I meant the HTTP keep-alive to the origin server (OS), not the TCP keep-alive of the operating system (OS). Excuse me for my confusing description.

> And yes: the forward proxy does *not* do HTTP keep-alive. Technical
> reason: the connections to the origin server are pooled, and retrieved
> from and returned to the pool for each request. A forward proxy usually
> talks to many different origin servers. Keeping those connections open
> in a naive way would lead to a lot of poorly used pools. Assuming that
> during one client connection the same origin server is often used for
> multiple requests, this could be improved, but it would bloat the
> already complicated proxy code even more.

Thank you for the answer and the reasons! That is what I wanted to hear. So I understand that HTTP keep-alive to the origin server is only supported for the reverse proxy, by design in httpd 2.2/2.3.

Regards,
Ryujiro Shibuya
OS Keep-alive on forward proxy
Hi,

Can you tell me whether the OS keep-alive feature should be expected to work in a forward proxy? mod_proxy in the latest httpd 2.2 and trunk (2.3) has an OS keep-alive feature (a keep-alive connection to the origin server), but it does not seem to work when httpd is configured as a forward proxy. If the client connection is kept alive, the proxy does send "Connection: Keep-Alive" to the origin server, but it doesn't actually try to keep the OS connection alive; it closes the OS connection immediately after receiving the response. The inconsistency between sending "Connection: Keep-Alive" and immediately closing the connection looks like a defect (I've just posted bug report 49699). But apart from the inconsistency, I wonder whether the OS keep-alive of mod_proxy was designed to work for the forward proxy in the first place.

Regards,
Ryujiro Shibuya
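As an aside, mod_proxy already honors a "proxy-nokeepalive" environment variable, so an administrator who wants the header to match the actual closing behavior can force "Connection: close" toward the origin server today. A minimal sketch of such a configuration (the exact placement is up to the deployment):

    # Tell mod_proxy not to attempt keep-alive toward the origin server;
    # the proxy will then send "Connection: close" and tear down the
    # backend connection after each request.
    SetEnv proxy-nokeepalive 1

This is the same mechanism Rüdiger's patch later sets internally for the default forward worker.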
RE: Age calculation in mod_cache.
Hello Rüdiger,

Thank you for the quick response and quick fix!

Regards,
Shibuya

-----Original Message-----
From: Plüm, Rüdiger, VF-Group [mailto:ruediger.pl...@vodafone.com]
Sent: Wednesday, April 14, 2010 5:58 PM
To: dev@httpd.apache.org
Subject: RE: Age calculation in mod_cache.

> -----Original Message-----
> From: Ryujiro Shibuya
> Sent: Wednesday, 14 April 2010 03:35
> To: dev@httpd.apache.org
> Subject: Age calculation in mod_cache.
>
> Hello,
>
> A minor issue has been found in the age calculation in mod_cache
> [ap_cache_current_age() in cache_util.c]. Under some unusual conditions,
> the age of cached content can be calculated as a negative value. The
> negative value is later cast to a huge unsigned integer, and an
> inappropriate Age header, e.g. "Age: 4294963617" (more than 135 years),
> may then be returned to the client. In my opinion, the negative age
> should be adjusted to zero, at least. What are your thoughts?

Makes sense. Fixed in trunk as r933886.

Regards

Rüdiger
Age calculation in mod_cache.
Hello,

A minor issue has been found in the age calculation in mod_cache [ap_cache_current_age() in cache_util.c]. Under some unusual conditions, the age of cached content can be calculated as a negative value. The negative value is later cast to a huge unsigned integer, and an inappropriate Age header, e.g. "Age: 4294963617" (more than 135 years), may then be returned to the client. In my opinion, the negative age should be adjusted to zero, at least. What are your thoughts?

By the way, the condition that triggers this issue is, for example, the system time being rewound after the creation of a cache entry, so that the request/response time recorded in the cache entry lies in the future. It may also be more likely to happen when a disk cache is deployed on shared disk storage and the system times are not well synchronized.

Regards,
Shibuya
Lifetime of connection-pool in mpm worker
Hi,

I'm engaged in developing an application (an original module) on top of Apache 2.0 with mod_proxy, mpm_worker, etc. on the Solaris platform. I have a few questions related to the connection pool in mpm_worker.

I had believed that the lifetime of each connection pool (the pool tagged "transaction" and often referred to as r->connection->pool) is basically the same as the lifetime of the TCP connection with the peer. But I recently found that in mpm_worker the pool is not destroyed at the end of the connection; it is reused for the next connection. So the connection pools are persistent as long as the process lives. (As far as I know, it is the same on the latest Apache 2.2 with mpm_worker.)

This would not be a problem if we were simply using Apache + mod_proxy + mpm_worker, but it can cause a problem when some additional module (like ours) allocates rather large memory blocks for some reason with apr_allocator_alloc() on the connection pool. If the programmer believes the lifetime of the pool equals that of the TCP connection, the size of the pool can grow drastically, as the following scenario shows (assume a single worker thread for simplicity):

1-1. The first request comes in.
1-2. mod_XYZ allocates 100MB with apr_allocator_alloc() on the connection pool. A memory block is allocated with malloc().
1-3. mod_XYZ then frees the above 100MB block with apr_allocator_free(). The block is added to [connection pool]->allocator->free[0].
1-4. The connection with the client is closed.

2-1. The second request comes in.
2-2. mod_XYZ allocates 99MB with apr_allocator_alloc() on the connection pool. The 100MB block pooled in [connection pool]->allocator->free[0] is leased for this request.
2-3. mod_XYZ then frees the above 100MB block with apr_allocator_free(). The block is added to [connection pool]->allocator->free[0] again.
2-4. The connection with the client is closed.

3-1. The third request comes in.
3-2. mod_XYZ allocates 101MB with apr_allocator_alloc() on the connection pool. A new memory block is allocated with malloc().
3-3. mod_XYZ then frees the above 101MB block with apr_allocator_free(). The block is added to [connection pool]->allocator->free[0]. There are now 100MB and 101MB blocks.
3-4. The connection with the client is closed.

4-1. The next request comes in.
4-2. mod_XYZ allocates 102MB with apr_allocator_alloc() on the connection pool. A new memory block is allocated with malloc().
4-3. mod_XYZ then frees the above 102MB block with apr_allocator_free(). The block is added to [connection pool]->allocator->free[0]. There are now 100MB, 101MB, and 102MB blocks.
4-4. The connection with the client is closed.

So, would some of you answer the following questions?

- Should programmers avoid using apr_allocator_alloc() on the connection pool for large memory blocks?
- Or should mpm_worker destroy the connection pool, contrary to the current implementation?
- Or, first of all, is it appropriate that a huge memory block is kept in [pool]->allocator->free[0] by apr_allocator_free()? The pooled memory in the free[0] slot can be leased even for very small memory requests.

Thanks,
Ryujiro Shibuya