Re: How does set status code in filter work
Your filter should run before the content type is set. Then you can do the usual redirect to /errors/HTTP_FORBIDDEN.html or whatever document you should be returning. Yes, a filter should only be working on the throughput, so you should be handling that yourself. Perhaps someone knows about the pre-content-set filter types - I'm definitely not the expert.

Joe

John Zhang wrote:
> In an output filter, if something is wrong and I set the request object's
> status to an error code, am I then responsible for providing the proper
> error page content? What is the correct way to handle errors in an output
> filter? Thanks, John
Re: apache 2.2 mod_proxy_http disable buffering
On 12/07/2007 10:19 PM, Pavel Stano wrote:
> thanks for the patch, problem is solved

Committed to trunk as r602349 (http://svn.apache.org/viewcvs.cgi?rev=602349&view=rev) and added to the backport proposal as r602439 (http://svn.apache.org/viewcvs.cgi?rev=602439&view=rev).

Regards

Rüdiger
Re: [PROPOSAL] introduce a global define for including unixd.h
On Dec 7, 2007 2:43 PM, Guenter Knauf [EMAIL PROTECTED] wrote:
> Hi,
> I have come across the ifdefs to avoid the inclusion of unixd.h a couple
> of times already; many third-party modules have this problem too.
> Therefore I would like to see a global define somewhere for that, e.g.
> AP(R?)_NEEDS_UNIXD_H or such; can we perhaps introduce that? This would
> in the future avoid platform ifdefs like this:
> http://svn.apache.org/viewvc?view=rev&revision=522933

Backing up a little, the interesting part that requires that header file is

#if !defined(OS2) && !defined(WIN32) && !defined(BEOS) && !defined(NETWARE)
    if (geteuid() == 0) {
        chown(fsc->limitdbfile, unixd_config.user_id, -1);
        chown(apr_pstrcat(p, fsc->limitdbfile, ".LoCK", NULL),
              unixd_config.user_id, -1);
        chown(apr_pstrcat(p, fsc->limitdbfile, ".db", NULL),
              unixd_config.user_id, -1);
        chown(apr_pstrcat(p, fsc->limitdbfile, ".dir", NULL),
              unixd_config.user_id, -1);
        chown(apr_pstrcat(p, fsc->limitdbfile, ".pag", NULL),
              unixd_config.user_id, -1);
    }
#endif

Consider providing a function in core that sets ownership of a file to the user id of the web server child processes when appropriate, and let the file that implements that function worry about whether or not unixd.h is needed. All platforms can implement the function, no-op or not.

ssl_scache_dbm.c has chown() logic like that above. I don't know if there are slight variations which should be accommodated.
Re: http://svn.apache.org/viewvc?view=rev&revision=602469
On 12/08/2007 04:07 PM, Jim Jagielski wrote:
> I didn't see this patch in the commit list but did see it referred to in
> the 2.2 STATUS file... I'm reviewing the patch now but two things did
> stick out:
>
> -            apr_brigade_cleanup(ctx->ctxbb);
> -            APR_BUCKET_REMOVE(b);
> -            APR_BRIGADE_INSERT_TAIL(passbb, b);
> -            break;
> -        }
> -        else if (APR_BUCKET_IS_FLUSH(b)) {
> +            apr_brigade_cleanup(ctx->linebb);
>              APR_BUCKET_REMOVE(b);
> -            APR_BRIGADE_INSERT_TAIL(passbb, b);
> -            rv = ap_pass_brigade(f->next, passbb);
> -            apr_brigade_cleanup(passbb);
> -            if (rv != APR_SUCCESS)
> -                return rv;
> +            APR_BRIGADE_INSERT_TAIL(ctx->passbb, b);
>          }
> +        /*
> +         * No need to handle FLUSH buckets separately as we call
> +         * ap_pass_brigade anyway at the end of the loop.
> +         */
>          else if (APR_BUCKET_IS_METADATA(b)) {
>              APR_BUCKET_REMOVE(b);
> -            APR_BRIGADE_INSERT_TAIL(passbb, b);
> +            APR_BRIGADE_INSERT_TAIL(ctx->passbb, b);
>
> The reason why the current code handles FLUSH separately is though, yes,
> the ap_pass_brigade is done at the end of the while loop, that is *only*
> done when we're done handling the full brigade... The intent was to honor
> flushes in the brigade immediately and reset at that point... Not sure
> why this was removed, since the old way is more efficient.

In fact the patch does all this, as it passes the passbb brigade down the chain after *each* processed bucket of the original brigade (the ap_pass_brigade is at *the end* of the while loop, *not* after the while loop). So we don't process the whole brigade before passing it down the chain, which would indeed be insane. In fact flush buckets are handled in the same manner as before the patch, as they fall into the if (APR_BUCKET_IS_METADATA(b)) branch, and the next code executed after that block is the ap_pass_brigade at the end of the while loop.

> I also see that AP_MIN_BYTES_TO_WRITE is no longer being used at all...

I saw no need for us to do *any* buffering in the filter. If the data passed down the chain is not reasonably large for sending over the wire, the core network filter will take care of this and buffer it until it is reasonable or a flush bucket is seen.

> I'm guessing MAX_BUCKETS is designed to replace that?? Again, the idea is
> to avoid extremely large in-process data sizes :)

The patch does that :-).

> The MAX_BUCKETS seems to be mostly for super-bad cases and not so much
> for behaving nicely in all cases. (mod_include.c does the same thing, btw)

No, MAX_BUCKETS should not replace that. It is a safety measure for super-bad cases like the example I gave in my initial review mail (http://mail-archives.apache.org/mod_mbox/httpd-dev/200712.mbox/[EMAIL PROTECTED]).

Regards

Rüdiger
http://svn.apache.org/viewvc?view=rev&revision=602469
I didn't see this patch in the commit list but did see it referred to in the 2.2 STATUS file... I'm reviewing the patch now but two things did stick out:

-            apr_brigade_cleanup(ctx->ctxbb);
-            APR_BUCKET_REMOVE(b);
-            APR_BRIGADE_INSERT_TAIL(passbb, b);
-            break;
-        }
-        else if (APR_BUCKET_IS_FLUSH(b)) {
+            apr_brigade_cleanup(ctx->linebb);
             APR_BUCKET_REMOVE(b);
-            APR_BRIGADE_INSERT_TAIL(passbb, b);
-            rv = ap_pass_brigade(f->next, passbb);
-            apr_brigade_cleanup(passbb);
-            if (rv != APR_SUCCESS)
-                return rv;
+            APR_BRIGADE_INSERT_TAIL(ctx->passbb, b);
         }
+        /*
+         * No need to handle FLUSH buckets separately as we call
+         * ap_pass_brigade anyway at the end of the loop.
+         */
         else if (APR_BUCKET_IS_METADATA(b)) {
             APR_BUCKET_REMOVE(b);
-            APR_BRIGADE_INSERT_TAIL(passbb, b);
+            APR_BRIGADE_INSERT_TAIL(ctx->passbb, b);

The reason why the current code handles FLUSH separately is though, yes, the ap_pass_brigade is done at the end of the while loop, that is *only* done when we're done handling the full brigade... The intent was to honor flushes in the brigade immediately and reset at that point... Not sure why this was removed, since the old way is more efficient.

I also see that AP_MIN_BYTES_TO_WRITE is no longer being used at all... I'm guessing MAX_BUCKETS is designed to replace that?? Again, the idea is to avoid extremely large in-process data sizes :) The MAX_BUCKETS seems to be mostly for super-bad cases and not so much for behaving nicely in all cases. (mod_include.c does the same thing, btw)
Enable persistence for SSL connections to the backend for reverse proxy
I hacked up a patch that enables the proxy to keep connections persistent in the HTTPS case. Having no persistence here was a big performance penalty. Basically the persistence is created by keeping the conn_rec structure created for our backend connection (whether http or https) in the connection pool. This required adjusting scoreboard.c in a way that its functions can properly deal with a NULL scoreboard handle, by ignoring the call or returning an error code.

Thoughts, comments?

Regards

Rüdiger

Index: server/scoreboard.c
===
--- server/scoreboard.c (revision 601660)
+++ server/scoreboard.c (working copy)
@@ -352,6 +352,9 @@
 {
     worker_score *ws;
 
+    if (!sb)
+        return;
+
     ws = ap_scoreboard_image->servers[sb->child_num][sb->thread_num];
 
 #ifdef HAVE_TIMES
@@ -479,6 +482,9 @@
 AP_DECLARE(int) ap_update_child_status(ap_sb_handle_t *sbh, int status,
                                        request_rec *r)
 {
+    if (!sbh)
+        return -1;
+
     return ap_update_child_status_from_indexes(sbh->child_num,
                                                sbh->thread_num, status, r);
 }
@@ -487,6 +493,9 @@
 {
     worker_score *ws;
 
+    if (!sbh)
+        return;
+
     if (sbh->child_num < 0) {
         return;
     }
@@ -512,6 +521,9 @@
 AP_DECLARE(worker_score *) ap_get_scoreboard_worker(ap_sb_handle_t *sbh)
 {
+    if (!sbh)
+        return NULL;
+
     return ap_get_scoreboard_worker_from_indexes(sbh->child_num,
                                                  sbh->thread_num);
 }
Index: modules/proxy/proxy_util.c
===
--- modules/proxy/proxy_util.c (revision 601660)
+++ modules/proxy/proxy_util.c (working copy)
@@ -1599,6 +1599,9 @@
     /* determine if the connection need to be closed */
     if (conn->close) {
         apr_pool_t *p = conn->pool;
+        if (conn->connection) {
+            apr_pool_cleanup_kill(conn->pool, conn, connection_cleanup);
+        }
         apr_pool_clear(conn->pool);
         memset(conn, 0, sizeof(proxy_conn_rec));
         conn->pool = p;
@@ -1619,6 +1622,54 @@
     return APR_SUCCESS;
 }
 
+static apr_status_t socket_cleanup(proxy_conn_rec *conn)
+{
+    if (conn->sock) {
+        apr_socket_close(conn->sock);
+        conn->sock = NULL;
+    }
+    if (conn->connection) {
+        apr_pool_cleanup_kill(conn->pool, conn, connection_cleanup);
+        conn->connection = NULL;
+    }
+    return APR_SUCCESS;
+}
+
+PROXY_DECLARE(apr_status_t) ap_proxy_ssl_connection_cleanup(proxy_conn_rec *conn,
+                                                            request_rec *r)
+{
+    apr_bucket_brigade *bb;
+    apr_status_t rv;
+
+    /*
+     * If we have an existing SSL connection it might be possible that the
+     * server sent some SSL message we have not read so far (e.g. a SSL
+     * shutdown message if the server closed the keepalive connection while
+     * the connection was held unused in our pool).
+     * So ensure that if present (=> APR_NONBLOCK_READ) it is read and
+     * processed. We don't expect any data to be in the returned brigade.
+     */
+    if (conn->sock && conn->connection) {
+        bb = apr_brigade_create(r->pool, r->connection->bucket_alloc);
+        rv = ap_get_brigade(conn->connection->input_filters, bb,
+                            AP_MODE_READBYTES, APR_NONBLOCK_READ,
+                            HUGE_STRING_LEN);
+        if ((rv != APR_SUCCESS) && !APR_STATUS_IS_EAGAIN(rv)) {
+            socket_cleanup(conn);
+        }
+        if (!APR_BRIGADE_EMPTY(bb)) {
+            apr_off_t len;
+
+            rv = apr_brigade_length(bb, 0, &len);
+            ap_log_rerror(APLOG_MARK, APLOG_DEBUG, rv, r,
+                          "proxy: SSL cleanup brigade contained %"
+                          APR_OFF_T_FMT " bytes of data.", len);
+        }
+        apr_brigade_destroy(bb);
+    }
+    return APR_SUCCESS;
+}
+
 /* reslist constructor */
 static apr_status_t connection_constructor(void **resource, void *params,
                                            apr_pool_t *pool)
@@ -1895,11 +1946,6 @@
     ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, s,
                  "proxy: %s: has released connection for (%s)",
                  proxy_function, conn->worker->hostname);
-    /* If there is a connection kill it's cleanup */
-    if (conn->connection) {
-        apr_pool_cleanup_kill(conn->connection->pool, conn, connection_cleanup);
-        conn->connection = NULL;
-    }
     connection_cleanup(conn);
 
     return OK;
@@ -1972,14 +2018,7 @@
         conn->hostname = apr_pstrdup(conn->pool, uri->hostname);
         conn->port = uri->port;
     }
-    if (conn->sock) {
-        apr_socket_close(conn->sock);
-        conn->sock = NULL;
-    }
-    if (conn->connection) {
-        apr_pool_cleanup_kill(conn->connection->pool, conn, connection_cleanup);
-        conn->connection = NULL;
-    }
+
Re: svn commit: r602485 - /httpd/httpd/branches/2.2.x/STATUS
On Dec 8, 2007, at 10:44 AM, [EMAIL PROTECTED] wrote:
> +niq: You're missing my point. That this patch fixes a bug is
> +     perfectly clear. But it does so by allocating (AP_IOBUFSIZE+1)
> +     bytes. My point: isn't that likely to be horribly inefficient
> +     if AP_IOBUFSIZE == 2^n? THEREFORE this fix should be
> +     accompanied by decrementing AP_IOBUFSIZE, so that the
> +     allocation of AP_IOBUFSIZE+1 bytes becomes 2^n.
> +     Correct me if I'm wrong about that mattering ever, on
> +     any platform and architecture.

Nick, as I understand it, your issue is whether it is functionally inefficient to allocate space that is not a power of 2, is that right? That is, a buffer of 2^n is better than one which is not? Since this is a local allocation and we're always just using AP_IOBUFSIZE worth of the data in it, it really doesn't matter. In fact, decreasing AP_IOBUFSIZE would be very bad, because that is a value used for all sorts of network and buffering codepaths, in which case having a 2^n size *is* crucial.

As far as the patch is concerned, the issue is that there are conditions where vd.vbuff.curpos is actually pointing to vrprintf_buf + AP_IOBUFSIZE, which is 1 outside of an allocation of vrprintf_buf[AP_IOBUFSIZE]. So when the *(vd.vbuff.curpos) = '\0'; is done, we're outside the bounds. There are two ways to fix this: the current way, which is simply to ensure that vrprintf_buf + AP_IOBUFSIZE is inside the allocation, OR we can simply drop the string-terminate line. Both address the issue. The reason I did the first is that it fixed the issue, and while I couldn't see the reason for the string-terminate line, I figured it was there for a reason anyway... I've since been able to verify that the line is really not needed at all. So in r602491 I've changed to the second fix, even though the first is fine.
Re: time for 1.3.40 and 2.2.7 ?
On 11/27/2007 07:26 PM, Jim Jagielski wrote:
> With APR now out, I think we're close to releasing 1.3.40 and 2.2.7...
> Anyone opposed to that gameplan?

There are 9 backport proposals currently in the STATUS file, and 7 of them miss only one vote. The two remaining ones only require some more or less large adjustments to the proposal and should miss only one vote after that. So come on folks, take some time and do some reviews please. Jim and I have only one vote :-). It would be really cool to get them into 2.2.7. Sorry for being so impatient :-).

Regards

Rüdiger
Re: time for 1.3.40 and 2.2.7 ?
On Sat, 08 Dec 2007 16:04:21 +0100 Ruediger Pluem [EMAIL PROTECTED] wrote:
> There are 9 backport proposals currently in the STATUS file, and 7 of
> them miss only one vote. The two remaining ones only require some more
> or less large adjustments to the proposal and should miss only one vote
> after that.

Just made a small update. More to come, time permitting.

At this stage, we might want to make a distinction between "must have" and "can wait" in backport proposals. So a fix for a serious bug gets priority attention over a new feature, and a simple proposal can be promoted over one that's time-consuming to review. Just a thought.

-- 
Nick Kew

Application Development with Apache - the Apache Modules Book
http://www.apachetutor.org/
Re: Enable persistence for SSL connections to the backend for reverse proxy
On Dec 8, 2007, at 9:38 AM, Ruediger Pluem wrote:
> Thoughts, comments?

A *real quick* review... so far, I see no issues (not tested yet though :) ) +1

> +static apr_status_t socket_cleanup(proxy_conn_rec *conn)
> +{
> +    if (conn->sock) {
> +        apr_socket_close(conn->sock);
> +        conn->sock = NULL;
> +    }
> +    if (conn->connection) {
> +        apr_pool_cleanup_kill(conn->pool, conn, connection_cleanup);
> +        conn->connection = NULL;
> +    }
> +    return APR_SUCCESS;
> +}
> +

Why not simply void? We don't check the return value and I see no way to return non-success anyway :)

> +PROXY_DECLARE(apr_status_t) ap_proxy_ssl_connection_cleanup(proxy_conn_rec *conn,
> +                                                            request_rec *r)
> +{
> ...
> Index: modules/proxy/mod_proxy_http.c
>
> +    if (is_ssl) {
> +        ap_proxy_ssl_connection_cleanup(backend, r);
> +    }
> +
>      /* Step One: Determine Who To Connect To */
>      if ((status = ap_proxy_determine_connection(p, r, conf, worker, backend,
>                                                  uri, &url, proxyname,

I assume all the below is to allow non-HTTP to also honor SSL connections, otherwise we'd simply place the function in mod_proxy_http.c and not worry about the API issues... +1

> Index: modules/proxy/mod_proxy.h
> ===
> --- modules/proxy/mod_proxy.h (revision 601660)
> +++ modules/proxy/mod_proxy.h (working copy)
> @@ -483,6 +483,8 @@
>  PROXY_DECLARE(void) ap_proxy_table_unmerge(apr_pool_t *p, apr_table_t *t,
>                                             char *key);
>  /* DEPRECATED (will be replaced with ap_proxy_connect_backend */
>  PROXY_DECLARE(int) ap_proxy_connect_to_backend(apr_socket_t **,
>                                                 const char *,
>                                                 apr_sockaddr_t *,
>                                                 const char *,
>                                                 proxy_server_conf *,
>                                                 server_rec *, apr_pool_t *);
> +PROXY_DECLARE(apr_status_t) ap_proxy_ssl_connection_cleanup(proxy_conn_rec *conn,
> +                                                            request_rec *r);
>  PROXY_DECLARE(int) ap_proxy_ssl_enable(conn_rec *c);
>  PROXY_DECLARE(int) ap_proxy_ssl_disable(conn_rec *c);
>  PROXY_DECLARE(int) ap_proxy_conn_is_https(conn_rec *c);
>
> Index: include/ap_mmn.h
> ===
> --- include/ap_mmn.h    (revision 601660)
> +++ include/ap_mmn.h    (working copy)
> @@ -143,6 +143,7 @@
>   * 20071108.2 (2.3.0-dev) Add st and keep fields to struct
>   *                        util_ldap_connection_t
>   * 20071108.3 (2.3.0-dev) Add API guarantee for adding connection filters
>   *                        with non-NULL request_rec pointer (ap_add_*_filter*)
> + * 20071108.4 (2.3.0-dev) Add ap_proxy_ssl_connection_cleanup
>   */
>
>  #define MODULE_MAGIC_COOKIE 0x41503234UL /* "AP24" */
>
>  #ifndef MODULE_MAGIC_NUMBER_MAJOR
>  #define MODULE_MAGIC_NUMBER_MAJOR 20071108
>  #endif
> -#define MODULE_MAGIC_NUMBER_MINOR 3    /* 0...n */
> +#define MODULE_MAGIC_NUMBER_MINOR 4    /* 0...n */
>
>  /**
>   * Determine if the server's current MODULE_MAGIC_NUMBER is at least a
Re: Memory consumption of mod_substitute
On Dec 7, 2007, at 7:09 AM, Plüm, Rüdiger, VF-Group wrote:
> -----Original Message-----
> From: [EMAIL PROTECTED]
> Sent: Friday, December 7, 2007 11:24
> To: dev@httpd.apache.org
> Subject: Re: Memory consumption of mod_substitute
>
> > On Dec 5, 2007 8:36 AM, Plüm, Rüdiger, VF-Group [EMAIL PROTECTED] wrote:
> > > * My test case led to the exceptional situation of a very large passbb
> > > bucket brigade (about 1,000,000 buckets) as a result of processing 4 MB
> > > of the file. So I add a flush bucket once I have more than MAX_BUCKET
> > > (1000) buckets in the brigade and pass it down the chain to get it sent
> > > and the passbb bucket brigade cleaned up and its memory reusable again.
> >
> > Ha! Is there a way we could be more aggressive in minimizing the number
> > of buckets mod_substitute creates? Perhaps using apr_bucket_copy to
> > create ref-counted versions of the replacement string?
>
> Possibly in the strmatch case, but in the regexp case the replacement
> string can be different each time when you use backreferences. Yes, there
> may be some optimization potential left here, but this might become
> tricky with the temporary pools I introduced.

Yeppers... plus it makes the code itself much more tricky with nasty paths :)
Re: http://svn.apache.org/viewvc?view=rev&revision=602469
On Dec 8, 2007, at 11:39 AM, Ruediger Pluem wrote:
> > The reason why the current code handles FLUSH separately is though, yes,
> > the ap_pass_brigade is done at the end of the while loop, that is *only*
> > done when we're done handling the full brigade... The intent was to
> > honor flushes in the brigade immediately and reset at that point... Not
> > sure why this was removed, since the old way is more efficient.
>
> In fact the patch does all this, as it passes the passbb brigade down the
> chain after *each* processed bucket of the original brigade (the
> ap_pass_brigade is at *the end* of the while loop, *not* after the while
> loop). So we don't process the whole brigade before passing it down the
> chain, which would indeed be insane.

I didn't catch that when looking over the actual patch itself. Seeing the actual code in trunk, yeah, this is better.

> > I also see that AP_MIN_BYTES_TO_WRITE is no longer being used at all...
>
> I saw no need for us to do *any* buffering in the filter. If the data
> passed down the chain is not reasonably large for sending over the wire,
> the core network filter will take care of this and buffer it until it is
> reasonable or a flush bucket is seen.
>
> > I'm guessing MAX_BUCKETS is designed to replace that?? Again, the idea
> > is to avoid extremely large in-process data sizes :)
>
> The patch does that :-).

+1
Re: http://svn.apache.org/viewvc?view=rev&revision=602469
On Dec 8, 2007, at 11:39 AM, Ruediger Pluem wrote:
> In fact the patch does all this, as it passes the passbb brigade down the
> chain after *each* processed bucket of the original brigade (the
> ap_pass_brigade is at *the end* of the while loop, *not* after the while
> loop).

I didn't catch that when looking over the actual patch itself. Seeing the actual code in trunk, yeah, this is better.

> So we don't process the whole brigade before passing it down the chain,
> which would indeed be insane. In fact flush buckets are handled in the
> same manner as before the patch, as they fall into the
> if (APR_BUCKET_IS_METADATA(b)) branch, and the next code executed after
> that block is the ap_pass_brigade at the end of the while loop.
>
> > I also see that AP_MIN_BYTES_TO_WRITE is no longer being used at all...
>
> I saw no need for us to do *any* buffering in the filter. If the data
> passed down the chain is not reasonably large for sending over the wire,
> the core network filter will take care of this and buffer it until it is
> reasonable or a flush bucket is seen.
>
> > I'm guessing MAX_BUCKETS is designed to replace that?? Again, the idea
> > is to avoid extremely large in-process data sizes :)
>
> The patch does that :-).
>
> > The MAX_BUCKETS seems to be mostly for super-bad cases and not so much
> > for behaving nicely in all cases. (mod_include.c does the same thing, btw)
>
> No, MAX_BUCKETS should not replace that. It is a safety measure for
> super-bad cases like the example I gave in my initial review mail
> (http://mail-archives.apache.org/mod_mbox/httpd-dev/200712.mbox/[EMAIL PROTECTED]).
>
> Regards
>
> Rüdiger
Re: Memory consumption of mod_substitute
On Dec 5, 2007 8:36 AM, Plüm, Rüdiger, VF-Group [EMAIL PROTECTED] wrote:
> * My test case led to the exceptional situation of a very large passbb
> bucket brigade (about 1,000,000 buckets) as a result of processing 4 MB
> of the file. So I add a flush bucket once I have more than MAX_BUCKET
> (1000) buckets in the brigade and pass it down the chain to get it sent
> and the passbb bucket brigade cleaned up and its memory reusable again.

By the by, even though we just use MAX_BUCKET here, it does seem that there is high potential for naming conflicts with this define... Maybe prefix it with some AP_* junk just in case?
Re: Memory consumption of mod_substitute
On 12/08/2007 07:41 PM, Jim Jagielski wrote:
> > On Dec 5, 2007 8:36 AM, Plüm, Rüdiger, VF-Group [EMAIL PROTECTED] wrote:
> > > * My test case led to the exceptional situation of a very large passbb
> > > bucket brigade (about 1,000,000 buckets) as a result of processing 4 MB
> > > of the file. So I add a flush bucket once I have more than MAX_BUCKET
> > > (1000) buckets in the brigade and pass it down the chain to get it sent
> > > and the passbb bucket brigade cleaned up and its memory reusable again.
>
> By the by, even though we just use MAX_BUCKET here, it does seem that
> there is high potential for naming conflicts with this define... Maybe
> prefix it with some AP_* junk just in case?

Fixed in r602533. You may want to add it to the backport proposal as well.

Regards

Rüdiger
Re: Memory consumption of mod_substitute
On Dec 8, 2007, at 2:31 PM, Ruediger Pluem wrote:
> > By the by, even though we just use MAX_BUCKET here, it does seem that
> > there is high potential for naming conflicts with this define... Maybe
> > prefix it with some AP_* junk just in case?
>
> Fixed in r602533. You may want to add it to the backport proposal as well.

Already done... I updated the rev2 patch in place, since I figured no one had looked at it yet :)
Re: Enable persistence for SSL connections to the backend for reverse proxy
On 12/08/2007 07:31 PM, Jim Jagielski wrote:
> > +static apr_status_t socket_cleanup(proxy_conn_rec *conn)
> > +{
> > +    if (conn->sock) {
> > +        apr_socket_close(conn->sock);
> > +        conn->sock = NULL;
> > +    }
> > +    if (conn->connection) {
> > +        apr_pool_cleanup_kill(conn->pool, conn, connection_cleanup);
> > +        conn->connection = NULL;
> > +    }
> > +    return APR_SUCCESS;
> > +}
> > +
>
> Why not simply void? We don't check the return value and I see no way to
> return non-success anyway :)

I have done so in preparation for the next patch I have in mind, to avoid memory leakage (this is not connected to SSL / non-SSL). But as this function is not exposed so far, I don't really care. I will have a look later to see if I can convert it to void.

> > Index: modules/proxy/mod_proxy_http.c
> >
> > +    if (is_ssl) {
> > +        ap_proxy_ssl_connection_cleanup(backend, r);
> > +    }
> > +
> >      /* Step One: Determine Who To Connect To */
> >      if ((status = ap_proxy_determine_connection(p, r, conf, worker, backend,
>
> I assume all the below is to allow non-HTTP to also honor SSL connections,
> otherwise we'd simply place the function in mod_proxy_http.c and not worry
> about the API issues... +1

From a quick glance I thought that there are functions in proxy_util.c that are only used by mod_proxy_http. But in fact ap_proxy_connection_create is also used by mod_proxy_ftp. I am not sure whether ap_proxy_ssl_connection_cleanup could be useful for mod_proxy_ftp. On the other hand, if I do not put ap_proxy_ssl_connection_cleanup in proxy_util.c, I need to export socket_cleanup from proxy_util.c, which really belongs there. So I would have the same API issue :-). I found ap_proxy_determine_connection more worthy of exposure than socket_cleanup. But finally it doesn't really matter.

BTW: I have not tested with FTP proxying so far.

Regards

Rüdiger
Re: Enable persistence for SSL connections to the backend for reverse proxy
On Dec 8, 2007, at 2:47 PM, Ruediger Pluem wrote:
> > > Index: modules/proxy/mod_proxy_http.c
> > >
> > > +    if (is_ssl) {
> > > +        ap_proxy_ssl_connection_cleanup(backend, r);
> > > +    }
> > > +
> > >      /* Step One: Determine Who To Connect To */
> > >      if ((status = ap_proxy_determine_connection(p, r, conf, worker, backend,
> >
> > I assume all the below is to allow non-HTTP to also honor SSL
> > connections, otherwise we'd simply place the function in
> > mod_proxy_http.c and not worry about the API issues... +1
>
> From a quick glance I thought that there are functions in proxy_util.c
> that are only used by mod_proxy_http.

The only concern was in populating the API with stuff that is really local to a specific function or capability... But I think that ap_proxy_ssl_connection_cleanup() being exposed as part of the API will allow possible uses that we can't anticipate now ;)
Re: Enable persistence for SSL connections to the backend for reverse proxy
On Dec 8, 2007, at 2:47 PM, Ruediger Pluem wrote:
> BTW: I have not tested with FTP proxying so far.

By the by, I found a few cycles to test the patch as is... so far, I see no issues... So +1 for folding it into trunk, and we'll address anything that may pop up in there :)