new modules in trunk
I've committed a few new modules to trunk tonight:

mod_ratelimit: Bandwidth rate limiting for clients.

mod_heartbeat: Generates multicast heartbeat messages containing the status of the server.

mod_heartmonitor: Collects these multicast heartbeats for other modules to use.

mod_lbmethod_heartbeat: Load balances mod_proxy_balancer workers using the data from the heartbeats.

If people could take a closer look at mod_lbmethod_heartbeat especially, since it is completely new and I don't really know the internals of mod_proxy that well; the other ones are a little more battle-tested. Thanks, Paul
Re: Handling AP_FILTER_ERROR
On 12/01/2008 12:42 AM, Nick Kew wrote: On Sun, 30 Nov 2008 18:22:39 -0500 Eric Covener [EMAIL PROTECTED] wrote: On Sun, Nov 30, 2008 at 5:20 PM, Nick Kew [EMAIL PROTECTED] wrote: In practice, the proposed fix looks good, and I want to vote +1. I'm just a little concerned over whether we're in danger of an infinite loop if we feed an AP_FILTER_ERROR into ap_http_header_filter. The filter will just return AP_FILTER_ERROR, and might get re-invoked with the error still there. This is my first real pass through the filters, so please correct me if something looks fishy. Something certainly looks fishy. But it's not your analysis: it's the long-standing confusion over what a filter error looks like. If the filter is ever re-invoked, with the same brigade containing the error bucket, doesn't that mean an earlier filter (or handler) is looping over the same (uncleared) brigade? Sure, it means no one has cleared the brigade. But it is customary to clear a brigade as we consume it, and it's possible something might be assuming that's happening. Looking at ap_http_header_filter, if we encounter an error, we first note it and continue. If we subsequently encounter EOC, we'll return ap_pass_brigade and ignore the error altogether. Otherwise we'll call ap_die (which is a no-op if passed AP_FILTER_ERROR) and then return AP_FILTER_ERROR up the stack, leaving the filter in place. Note that the ap_die() call in this context doesn't pass the filter chain rv (AP_FILTER_ERROR) but the stashed-away HTTP error code that was pulled from the error bucket. Yes. But in principle, that error could be AP_FILTER_ERROR, and will be if recursion ever happens. How about, for example, a filter error in a subrequest returning AP_FILTER_ERROR? What does the parent request see? That depends on what the caller of ap_run_sub_req does with the result. I guess no general statement is possible here. This is the call that actually gets us the error response someone has queued up earlier (e.g.
LimitRequestBody during the HTTP_IN filter). It's the later call to ap_die, back in ap_process_request, that should see AP_FILTER_ERROR and no-op. This is the return code that the general fix for PR#31759 is catching as non-HTTP and changing to 500. "Should" is great; I'm worrying about edge cases. To rephrase: I'd +1 the patch if either: (1) We do as I suggest, and remove the error bucket in ap_http_header_filter, or (2) Someone can convince me there is really no possibility, even in the event of a bug elsewhere, of this recursion. Sorry, but now you confused ME :-). So let's try to sort this out from the start. 1. The run_handler hook might return AP_FILTER_ERROR back to ap_invoke_handler. So AP_FILTER_ERROR might be the final result *after* the handler phase. With the current code ap_invoke_handler would transform AP_FILTER_ERROR into HTTP_INTERNAL_SERVER_ERROR and return this. After the patch ap_invoke_handler would return AP_FILTER_ERROR instead. 2. IMHO ap_invoke_handler is only called in three locations: ap_run_sub_req, ap_process_request, and ap_internal_redirect. 2.1 ap_run_sub_req: ap_run_sub_req will return HTTP_INTERNAL_SERVER_ERROR (now) or AP_FILTER_ERROR (after the patch) to the caller. It cannot be said what the caller does with this return code, as subrequests are done from many locations and also by third-party modules. 2.2 ap_process_request: ap_process_request sets r->status to HTTP_OK (to avoid error page recursion) and calls ap_die with the return code (HTTP_INTERNAL_SERVER_ERROR / AP_FILTER_ERROR). With the current code this produces an error page; after the patch this is a no-op. Afterwards the status is never used again. 2.3 ap_internal_redirect: Seems like the same case as ap_process_request to me. I fail to see where a recursion could happen here after the patch and where the error buckets come into the game.
IMHO the worst case is that a filter returns AP_FILTER_ERROR before any HTTP status code was sent, but does nothing else (like sending an error bucket down the chain); the handler feeding the chain does nothing but return this code to ap_invoke_handler / ap_internal_redirect and subsequently to ap_process_request, which after the patch would cause us to not send any response at all (which would also be wrong). So ap_die maybe should do the following:

    if (type == AP_FILTER_ERROR) {
        if ([ap_http_header_filter is still in chain]) {
            /* log this */
            type = HTTP_INTERNAL_SERVER_ERROR;
        }
        else {
            return;
        }
    }

Regards Rüdiger
Re: Handling AP_FILTER_ERROR
On 1 Dec 2008, at 08:52, Ruediger Pluem wrote: 2. IMHO ap_invoke_handler is only called in three locations: It's a public API, so could be called by any module. So ap_die maybe should do the following:

    if (type == AP_FILTER_ERROR) {
        if ([ap_http_header_filter is still in chain]) {
            /* log this */
            type = HTTP_INTERNAL_SERVER_ERROR;
        }
        else {
            return;
        }
    }

That looks good, too. But do you see any objection to the (IMHO simpler) fix of removing error buckets as we detect them? -- Nick Kew
Re: new modules in trunk
On 1 Dec 2008, at 08:17, Paul Querna wrote: I've committed a few new modules to trunk tonight: Interesting. Are we expecting docs anytime soon? -- Nick Kew
Re: new modules in trunk
On Mon, Dec 1, 2008 at 11:17 AM, Paul Querna [EMAIL PROTECTED] wrote: mod_heartmonitor: Collects these Multicast heartbeats for other modules to use. Interesting. Are there other modules that create separate thread(s), bypassing the request-response cycle? -- С уважением, Кевац Марко Sincerely yours, Kevac Marko
Re: Handling AP_FILTER_ERROR
On 12/01/2008 10:02 AM, Nick Kew wrote: On 1 Dec 2008, at 08:52, Ruediger Pluem wrote: 2. IMHO ap_invoke_handler is only called in three locations: It's a public API, so could be called by any module. Correct. So ap_die maybe should do the following:

    if (type == AP_FILTER_ERROR) {
        if ([ap_http_header_filter is still in chain]) {
            /* log this */
            type = HTTP_INTERNAL_SERVER_ERROR;
        }
        else {
            return;
        }
    }

That looks good, too. But do you see any objection to the (IMHO simpler) fix of removing error buckets as we detect them? Not directly. Could you please propose a concrete patch? It makes discussion easier :-). Regards Rüdiger
Re: Handling AP_FILTER_ERROR
On 1 Dec 2008, at 09:31, Ruediger Pluem wrote: But do you see any objection to the (IMHO simpler) fix of removing error buckets as we detect them? Not directly. Could you please propose a concrete patch? It makes discussion easier :-). I think the one-liner should work, since we don't re-use the error bucket. So against trunk, that's:

--- modules/http/http_filters.c (revision 722000)
+++ modules/http/http_filters.c (working copy)
@@ -1133,6 +1133,7 @@
         {
             if (AP_BUCKET_IS_ERROR(e) && !eb) {
                 eb = e->data;
+                apr_bucket_delete(e);
                 continue;
             }
             /*

-- Nick Kew
Re: new modules in trunk
Nick Kew wrote: On 1 Dec 2008, at 08:17, Paul Querna wrote: I've committed a few new modules to trunk tonight: Interesting. Are we expecting docs anytime soon? Patches welcome; the readme files are all I really had :-)
Re: Handling AP_FILTER_ERROR
On 12/01/2008 11:01 AM, Nick Kew wrote: On 1 Dec 2008, at 09:31, Ruediger Pluem wrote: But do you see any objection to the (IMHO simpler) fix of removing error buckets as we detect them? Not directly. Could you please propose a concrete patch? It makes discussion easier :-). I think the one-liner should work, since we don't re-use the error bucket. So against trunk, that's:

--- modules/http/http_filters.c (revision 722000)
+++ modules/http/http_filters.c (working copy)
@@ -1133,6 +1133,7 @@
         {
             if (AP_BUCKET_IS_ERROR(e) && !eb) {
                 eb = e->data;
+                apr_bucket_delete(e);
                 continue;
             }
             /*

I guess this could segfault (the status is in eb->status). Why not clean up the whole brigade? IMHO it is of no use anymore when we do an ap_die and afterwards return AP_FILTER_ERROR:

Index: modules/http/http_filters.c
===
--- modules/http/http_filters.c (Revision 721833)
+++ modules/http/http_filters.c (Arbeitskopie)
@@ -1145,7 +1145,11 @@
         }
     }
     if (eb) {
-        ap_die(eb->status, r);
+        int status;
+
+        status = eb->status;
+        apr_brigade_cleanup(b);
+        ap_die(status, r);
         return AP_FILTER_ERROR;
     }

Regards Rüdiger
Re: Handling AP_FILTER_ERROR
On 1 Dec 2008, at 10:19, Ruediger Pluem wrote:

    if (eb) {
-        ap_die(eb->status, r);
+        int status;
+
+        status = eb->status;
+        apr_brigade_cleanup(b);
+        ap_die(status, r);
         return AP_FILTER_ERROR;
     }

Good call. +1 to that. -- Nick Kew
Re: Handling AP_FILTER_ERROR
On Mon, Dec 1, 2008 at 6:44 AM, Nick Kew [EMAIL PROTECTED] wrote: On 1 Dec 2008, at 10:19, Ruediger Pluem wrote:

    if (eb) {
-        ap_die(eb->status, r);
+        int status;
+
+        status = eb->status;
+        apr_brigade_cleanup(b);
+        ap_die(status, r);
         return AP_FILTER_ERROR;
     }

Good call. +1 to that. I'll revisit this afternoon, add to the backport, and reset the voting. Thanks all. -- Eric Covener [EMAIL PROTECTED]
Re: svn commit: r721952 - in /httpd/httpd/trunk: ./ modules/ modules/cluster/
On 12/01/2008 03:55 AM, [EMAIL PROTECTED] wrote: Author: pquerna Date: Sun Nov 30 18:55:14 2008 New Revision: 721952 URL: http://svn.apache.org/viewvc?rev=721952&view=rev Log: Add two new modules, originally written at Joost, to handle load balancing across multiple apache servers within the same datacenter. mod_heartbeat generates multicast status messages with the current number of clients connected, but the format can easily be extended to include other things. mod_heartmonitor collects these messages into a static file, which can then be used by other modules to make load balancing decisions on. Added: httpd/httpd/trunk/modules/cluster/ (with props) httpd/httpd/trunk/modules/cluster/Makefile.in (with props) httpd/httpd/trunk/modules/cluster/README.heartbeat httpd/httpd/trunk/modules/cluster/README.heartmonitor httpd/httpd/trunk/modules/cluster/config.m4 httpd/httpd/trunk/modules/cluster/mod_heartbeat.c (with props) httpd/httpd/trunk/modules/cluster/mod_heartmonitor.c (with props) Modified: httpd/httpd/trunk/CHANGES httpd/httpd/trunk/modules/README

Modified: httpd/httpd/trunk/CHANGES
URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/CHANGES?rev=721952&r1=721951&r2=721952&view=diff
==
--- httpd/httpd/trunk/CHANGES [utf-8] (original)
+++ httpd/httpd/trunk/CHANGES [utf-8] Sun Nov 30 18:55:14 2008
@@ -2,6 +2,12 @@
 Changes with Apache 2.3.0 [ When backported to 2.2.x, remove entry from this file ]
+  *) mod_heartmonitor: New module to collect heartbeats, and write out a file
+     so that other modules can load balance traffic as needed. [Paul Querna]
+
+  *) mod_heartbeat: New module to genarate multicast heartbeats to konw if a
+     server is online. [Paul Querna]
+

s/konw/know/ In addition to the later adjusted svn log message I would propose that you add Sander's and Justin's names to this change entry as well.
Added: httpd/httpd/trunk/modules/cluster/mod_heartbeat.c
URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/cluster/mod_heartbeat.c?rev=721952&view=auto
==
--- httpd/httpd/trunk/modules/cluster/mod_heartbeat.c (added)
+++ httpd/httpd/trunk/modules/cluster/mod_heartbeat.c Sun Nov 30 18:55:14 2008
@@ -0,0 +1,354 @@
+/* Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+#include "httpd.h"
+#include "http_config.h"
+#include "http_log.h"
+#include "apr_strings.h"
+
+#include "ap_mpm.h"
+#include "scoreboard.h"
+
+#ifndef HEARTBEAT_INTERVAL
+#define HEARTBEAT_INTERVAL (1)
+#endif
+
+module AP_MODULE_DECLARE_DATA heartbeat_module;
+
+typedef struct hb_ctx_t
+{
+    int active;
+    apr_sockaddr_t *mcast_addr;
+    int server_limit;
+    int thread_limit;
+    int status;
+    int keep_running;

Shouldn't this be volatile?

+    apr_proc_mutex_t *mutex;
+    const char *mutex_path;
+    apr_thread_mutex_t *start_mtx;
+    apr_thread_t *thread;
+    apr_file_t *lockf;
+} hb_ctx_t;
+
+static const char *msg_format = "v=%u&ready=%u&busy=%u";
+
+#define MSG_VERSION (1)
+
+static int hb_monitor(hb_ctx_t *ctx, apr_pool_t *p)
+{
+    int i, j;
+    apr_uint32_t ready = 0;
+    apr_uint32_t busy = 0;
+
+    for (i = 0; i < ctx->server_limit; i++) {
+        process_score *ps;
+        ps = ap_get_scoreboard_process(i);
+
+        for (j = 0; j < ctx->thread_limit; j++) {
+            worker_score *ws = NULL;
+
+            ws = &ap_scoreboard_image->servers[i][j];
+
+            int res = ws->status;
+
+            if (res == SERVER_READY && ps->generation == ap_my_generation) {
+                ready++;
+            }
+            else if (res != SERVER_DEAD &&
+                     res != SERVER_STARTING && res != SERVER_IDLE_KILL) {
+                busy++;

Is this correct even if ps->generation != ap_my_generation?

+            }
+        }
+    }
+
+    char buf[256];
+    apr_size_t len =
+        apr_snprintf(buf, sizeof(buf), msg_format, MSG_VERSION, ready, busy);
+
+    apr_socket_t *sock = NULL;
+    do {
+        apr_status_t rv;
Re: Handling AP_FILTER_ERROR
On Mon, Dec 1, 2008 at 7:33 AM, Eric Covener [EMAIL PROTECTED] wrote: On Mon, Dec 1, 2008 at 6:44 AM, Nick Kew [EMAIL PROTECTED] wrote: On 1 Dec 2008, at 10:19, Ruediger Pluem wrote:

    if (eb) {
-        ap_die(eb->status, r);
+        int status;
+
+        status = eb->status;
+        apr_brigade_cleanup(b);
+        ap_die(status, r);
         return AP_FILTER_ERROR;
     }

Good call. +1 to that. I'll revisit this afternoon, add to the backport, and reset the voting. Committed in r722081 with no change in the perl framework. -- Eric Covener [EMAIL PROTECTED]
Re: svn commit: r721965 - in /httpd/httpd/trunk/modules/filters: config.m4 mod_ratelimit.c mod_ratelimit.h
On 12/01/2008 06:12 AM, [EMAIL PROTECTED] wrote: Author: pquerna Date: Sun Nov 30 21:12:22 2008 New Revision: 721965 URL: http://svn.apache.org/viewvc?rev=721965&view=rev Log: Add a new module, mod_ratelimit, originally written at Joost, which can rate limit the outgoing bandwidth to a client. Added: httpd/httpd/trunk/modules/filters/mod_ratelimit.c (with props) httpd/httpd/trunk/modules/filters/mod_ratelimit.h (with props) Modified: httpd/httpd/trunk/modules/filters/config.m4

I guess you missed a change entry here.

+static apr_status_t
+rate_limit_filter(ap_filter_t *f, apr_bucket_brigade *input_bb)
+{
+    apr_status_t rv = APR_SUCCESS;
+    rl_ctx_t *ctx = f->ctx;
+    apr_bucket *fb;
+    int do_sleep = 0;
+    apr_bucket_alloc_t *ba = f->r->connection->bucket_alloc;
+    apr_bucket_brigade *bb = input_bb;
+
+    if (f->c->aborted) {
+        ap_log_rerror(APLOG_MARK, APLOG_ERR, 0, f->r, "rl: conn aborted");
+        apr_brigade_cleanup(bb);
+        return APR_ECONNABORTED;
+    }
+
+    if (ctx == NULL) {
+
+        /* no subrequests. */
+        if (f->r->main != NULL) {
+            ap_remove_output_filter(f);
+            return ap_pass_brigade(f->next, bb);
+        }
+
+        const char *rl = apr_table_get(f->r->subprocess_env, "rate-limit");
+
+        if (rl == NULL) {
+            ap_remove_output_filter(f);
+            return ap_pass_brigade(f->next, bb);
+        }
+
+        /* first run, init stuff */
+        ctx = apr_palloc(f->r->pool, sizeof(rl_ctx_t));
+        f->ctx = ctx;
+        ctx->speed = 0;
+        ctx->state = RATE_LIMIT;
+
+        /* rl is in kilo bytes / second */
+        ctx->speed = atoi(rl) * 1024;
+
+        if (ctx->speed == 0) {
+            /* remove ourselves */
+            ap_remove_output_filter(f);
+            return ap_pass_brigade(f->next, bb);
+        }
+
+        /* calculate how many bytes / interval we want to send */
+        /* speed is bytes / second, so, how many (speed / 1000 % interval) */
+        ctx->chunk_size = (ctx->speed / (1000 / RATE_INTERVAL_MS));
+        ctx->tmpbb = apr_brigade_create(f->r->pool, ba);
+        ctx->holdingbb = apr_brigade_create(f->r->pool, ba);
+    }
+
+    while (ctx->state != RATE_ERROR &&
+           (!APR_BRIGADE_EMPTY(bb) || !APR_BRIGADE_EMPTY(ctx->holdingbb))) {
+        apr_bucket *e;
+
+        if (!APR_BRIGADE_EMPTY(ctx->holdingbb)) {
+            APR_BRIGADE_CONCAT(bb, ctx->holdingbb);
+            apr_brigade_cleanup(ctx->holdingbb);
+        }
+
+        while (ctx->state == RATE_FULLSPEED && !APR_BRIGADE_EMPTY(bb)) {
+            /* Find where we 'stop' going full speed. */
+            for (e = APR_BRIGADE_FIRST(bb);
+                 e != APR_BRIGADE_SENTINEL(bb); e = APR_BUCKET_NEXT(e)) {
+                if (RL_BUCKET_IS_END(e)) {
+                    apr_bucket *f;
+                    f = APR_RING_LAST(&bb->list);
+                    APR_RING_UNSPLICE(e, f, link);
+                    APR_RING_SPLICE_TAIL(&ctx->holdingbb->list, e, f,
+                                         apr_bucket, link);
+                    ctx->state = RATE_LIMIT;
+                    break;
+                }
+            }
+
+            if (f->c->aborted) {
+                apr_brigade_cleanup(bb);
+                ctx->state = RATE_ERROR;
+                break;
+            }
+
+            fb = apr_bucket_flush_create(ba);
+            APR_BRIGADE_INSERT_TAIL(bb, fb);
+            rv = ap_pass_brigade(f->next, bb);
+
+            if (rv != APR_SUCCESS) {
+                ctx->state = RATE_ERROR;
+                ap_log_rerror(APLOG_MARK, APLOG_ERR, rv, f->r,
+                              "rl: full speed brigade pass failed.");
+            }
+        }
+
+        while (ctx->state == RATE_LIMIT && !APR_BRIGADE_EMPTY(bb)) {
+            for (e = APR_BRIGADE_FIRST(bb);
+                 e != APR_BRIGADE_SENTINEL(bb); e = APR_BUCKET_NEXT(e)) {
+                if (RL_BUCKET_IS_START(e)) {
+                    apr_bucket *f;
+                    f = APR_RING_LAST(&bb->list);
+                    APR_RING_UNSPLICE(e, f, link);
+                    APR_RING_SPLICE_TAIL(&ctx->holdingbb->list, e, f,
+                                         apr_bucket, link);
+                    ctx->state = RATE_FULLSPEED;
+                    break;
+                }
+            }
+
+            while (!APR_BRIGADE_EMPTY(bb)) {
+                apr_bucket *stop_point;
+                apr_off_t len = 0;
+
+                if (f->c->aborted) {
+                    apr_brigade_cleanup(bb);
+                    ctx->state = RATE_ERROR;

Shouldn't we do a break or continue here?

+                }
+
+                if (do_sleep) {
+                    apr_sleep(RATE_INTERVAL_MS * 1000);
+                }
+                else {
+                    do_sleep = 1;
+                }
+
+                apr_brigade_length(bb, 1, &len);
+
Re: new modules in trunk
On Monday 01 December 2008 09:17:24, Paul Querna wrote: I've committed a few new modules to trunk tonight: mod_ratelimit: Bandwidth Rate Limiting for Clients. mod_heartbeat: Generates Multicast heartbeat messages containing the status of the server. mod_heartmonitor: Collects these Multicast heartbeats for other modules to use. mod_lbmethod_heartbeat: Module to Load Balance mod_proxy_balancer workers using the data from the heartbeats. If people could take a closer look at mod_lbmethod_heartbeat especially, since it is completely new, and I don't really know the internals of mod_proxy that well, while the other ones are a little more battle tested. Thanks, Paul Apache 2.2.10 segfaults if either mod_heartbeat or mod_heartmonitor is unconfigured and/or has no access to the logs directory. Hard to say in which combination. Well..., heh..., I wanted to try this on Mandriva. When configured and with access to the logs directory it does not segfault. Just thought I should mention this even though the modules are from trunk. -- Regards // Oden Eriksson
Re: new modules in trunk
Kevac Marko wrote: On Mon, Dec 1, 2008 at 11:17 AM, Paul Querna [EMAIL PROTECTED] wrote: mod_heartmonitor: Collects these Multicast heartbeats for other modules to use. Interesting. Are there other modules that create separate thread(s), bypassing the request-response cycle? Considered doing this for the data vs. control channel of mod_ftp, but it's essentially not necessary and introduces additional overhead to set up and tear down threads during operation. A thread pool was an alternative, but in the end, it looked too expensive to consider.
Re: new modules in trunk
William A. Rowe, Jr. wrote: Kevac Marko wrote: On Mon, Dec 1, 2008 at 11:17 AM, Paul Querna [EMAIL PROTECTED] wrote: mod_heartmonitor: Collects these Multicast heartbeats for other modules to use. Interesting. Are there other modules that create separate thread(s), bypassing the request-response cycle? Considered doing this for the data vs. control channel of mod_ftp, but it's essentially not necessary and introduces additional overhead to set up and tear down threads during operation. A thread pool was an alternative, but in the end, it looked too expensive to consider. For heartbeat/heartmonitor, originally I wrote them to spawn a thread off the master process, but code running as root sucks, and if there were any problems, like it crashing, it would fubar up the entire httpd process chain. Later I moved it to using a thread on each worker process, since if it crashes, I still have other workers, and now the code is running in the child without privileges. If the core fully supported UDP and multicast listeners, it should be possible to write the heartmonitor module as just a protocol module, although heartbeat would still likely need its current structure. -Paul
Re: svn commit: r721987 - in /httpd/httpd/trunk: CHANGES modules/proxy/config.m4 modules/proxy/mod_lbmethod_heartbeat.c
On 12/01/2008 08:25 AM, [EMAIL PROTECTED] wrote: Author: pquerna Date: Sun Nov 30 23:25:11 2008 New Revision: 721987 URL: http://svn.apache.org/viewvc?rev=721987&view=rev Log: Add a new module to read in the heartbeat file and do load balancing for mod_proxy based upon it. Added: httpd/httpd/trunk/modules/proxy/mod_lbmethod_heartbeat.c (contents, props changed) - copied, changed from r721944, httpd/httpd/trunk/modules/proxy/examples/mod_lbmethod_rr.c Modified: httpd/httpd/trunk/CHANGES httpd/httpd/trunk/modules/proxy/config.m4

Copied: httpd/httpd/trunk/modules/proxy/mod_lbmethod_heartbeat.c (from r721944, httpd/httpd/trunk/modules/proxy/examples/mod_lbmethod_rr.c)
URL: http://svn.apache.org/viewvc/httpd/httpd/trunk/modules/proxy/mod_lbmethod_heartbeat.c?p2=httpd/httpd/trunk/modules/proxy/mod_lbmethod_heartbeat.c&p1=httpd/httpd/trunk/modules/proxy/examples/mod_lbmethod_rr.c&r1=721944&r2=721987&rev=721987&view=diff
==
+
+static proxy_worker *find_best_hb(proxy_balancer *balancer,
+                                  request_rec *r)
 {
+    apr_status_t rv;
     int i;
+    apr_uint32_t openslots = 0;
     proxy_worker *worker;
+    hb_server_t *server;
+    apr_array_header_t *up_servers;
     proxy_worker *mycandidate = NULL;
-    int checking_standby;
-    int checked_standby;
-    rr_data *ctx;
-
-    ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
-                 "proxy: Entering roundrobin for BALANCER %s (%d)",
-                 balancer->name, (int)getpid());
-
-    /* The index of the candidate last chosen is stored in ctx->index */
-    if (!balancer->context) {
-        /* UGLY */
-        ctx = apr_pcalloc(r->server->process->pconf, sizeof(rr_data));
-        balancer->context = (void *)ctx;
-        ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
-                     "proxy: Creating roundrobin ctx for BALANCER %s (%d)",
-                     balancer->name, (int)getpid());
-    } else {
-        ctx = (rr_data *)balancer->context;
-    }
-    ap_log_error(APLOG_MARK, APLOG_DEBUG, 0, r->server,
-                 "proxy: roundrobin index: %d (%d)",
-                 ctx->index, (int)getpid());
-
-    checking_standby = checked_standby = 0;
-    while (!mycandidate && !checked_standby) {
-        worker = (proxy_worker *)balancer->workers->elts;
+    apr_pool_t *tpool;
+    apr_hash_t *servers;
-
-        for (i = 0; i < balancer->workers->nelts; i++, worker++) {
-            if (i < ctx->index)
-                continue;
-            if ((checking_standby ? !PROXY_WORKER_IS_STANDBY(worker) : PROXY_WORKER_IS_STANDBY(worker)))
-                continue;
-            if (!PROXY_WORKER_IS_USABLE(worker))
-                ap_proxy_retry_worker("BALANCER", worker, r->server);
-            if (PROXY_WORKER_IS_USABLE(worker)) {
-                mycandidate = worker;
-                break;
+    lb_hb_ctx_t *ctx =
+        ap_get_module_config(r->server->module_config,
+                             &lbmethod_heartbeat_module);
+
+    apr_pool_create(&tpool, r->pool);
+
+    servers = apr_hash_make(tpool);
+
+    rv = read_heartbeats(ctx->path, servers, tpool);
+
+    if (rv) {
+        ap_log_rerror(APLOG_MARK, APLOG_ERR, rv, r,
+                      "lb_heartbeat: Unable to read heartbeats at '%s'",
+                      ctx->path);
+        apr_pool_destroy(tpool);
+        return NULL;
+    }
+
+    up_servers = apr_array_make(tpool, apr_hash_count(servers), sizeof(hb_server_t *));
+
+    for (i = 0; i < balancer->workers->nelts; i++) {
+        worker = &APR_ARRAY_IDX(balancer->workers, i, proxy_worker);
+        server = apr_hash_get(servers, worker->hostname, APR_HASH_KEY_STRING);

Hm, the hash uses IP addresses as keys whereas worker->hostname could be a name.

+
+        if (!server) {
+            continue;
+        }
+
+        if (!PROXY_WORKER_IS_USABLE(worker)) {
+            ap_proxy_retry_worker("BALANCER", worker, r->server);
+        }
+
+        if (PROXY_WORKER_IS_USABLE(worker)) {
+            server->worker = worker;
+            if (server->seen < 10) {

Hm. Hardcoded 10. This seems to be ugly.

+                openslots += server->ready;
+                APR_ARRAY_PUSH(up_servers, hb_server_t *) = server;
             }
         }

Regards Rüdiger
Re: svn commit: r721987 - in /httpd/httpd/trunk: CHANGES modules/proxy/config.m4 modules/proxy/mod_lbmethod_heartbeat.c
Ruediger Pluem wrote:

+    for (i = 0; i < balancer->workers->nelts; i++) {
+        worker = &APR_ARRAY_IDX(balancer->workers, i, proxy_worker);
+        server = apr_hash_get(servers, worker->hostname, APR_HASH_KEY_STRING);

Hm, the hash uses IP addresses as keys whereas worker->hostname could be a name. Yeah, DNS is bad, don't use it... :-) I don't know an easy way to solve this, since the worker struct doesn't contain the resolved IP, and resolving the IPs of all the workers is a very bad and potentially very slow thing. Thoughts?
Re: AuthzMergeRules blocks everything in default configuration
IIRC, trunk contains (or contained) a security problem with regard to backward compatibility with 2.x configs. I won't consider it releasable until that has been fixed one way or another, and I can't tell from this mail thread whether the actual fix was committed or not. I thought that Chris committed the patches indicated below, but there is no SatisfySections config directive in trunk, and AuthzMergeRules Off still appears in docs/conf/httpd.conf:162-167 even though it doesn't appear to be a valid config directive either. Note that the global (unconfigured) default must be Off in order to stay compatible with 2.x. The docs seem to indicate this is now MergeAuthz Off and is off by default? Is that true in the code? The code seems to confirm that conf->op = AUTHZ_LOGIC_OFF; is the default until one of the Match* containers is used (why those directive names are Match* instead of AuthMatch* boggles my mind). What is the conclusion to this thread? Why are all the Authz directives given random names? Am I the only one that finds this feature set impossible to follow? Roy On Oct 29, 2008, at 3:17 PM, Chris Darroch wrote: Dan Poirier wrote: I like the idea of replacing ON with AND and OR. It would not only provide more control, but make it explicit what kind of merging was going to happen. I have mixed thoughts about changing the default to OFF. Cons: That would mean every container directive would have to specify some sort of access control (or at least AuthzMergeRules AND) or it'd be wide open, right? I don't think so; at least, that's not what I was intending. Rather, something much like 2.2's behaviour: containers that don't specify any authz are simply protected by the nearest container merged ahead of them that does specify authz.
I'm hoping to put this thread to bed shortly with the patches available here: http://people.apache.org/~chrisd/patches/httpd_authnz_configs/ My intent is to finish up the necessary documentation changes and get everything committed to trunk in the next few days. (Fingers crossed!) In the meantime, an overview follows. Many, many thanks are due to Brad Nicholes, whose massive refactoring of the authn/z system makes all of this work possible. 1) Limit and LimitExcept are made nestable, with an error in the case where all methods are configured out. There are also some tuneups related to Limit/LimitExcept being intended to contain authz configurations only and to not be functional outside Directory/Location/etc. 2) A setting of AuthType None is allowed, which sets ap_auth_type() to NULL and thus provides a way to turn off authentication for a sub-directory. This corresponds to several convenient ways in 2.4 to turn off authorization, including Require all granted (and, at a deeper level, the new SatisfySections Off). 3) The mod_authz_core.c module is rewritten to attempt to deal with the issues discussed on this thread and the previous one, as well as those described at the end of this email. The authz_provider_list two-pronged linked lists are replaced by a tree structure that mirrors what is configured via SatisfyAll and SatisfyAny. A pair of negative authz containers are introduced, SatisfyNotAll and SatisfyNotAny, which negate their operands in the same manner as Reject. Thus we have the following table:

    Require        A
    Reject         !A
    SatisfyAll     (A && B && ...)
    SatisfyAny     (A || B || ...)
    SatisfyNotAll  !(A && B && ...)
    SatisfyNotAny  !(A || B || ...)

The SatisfyAny directive is renamed from SatisfyOne so as not to imply XOR-like functionality (requiring exactly one successful operand). A number of configuration-time checks are implemented to warn administrators regarding redundant or non-functional authz configurations.
In particular, since the negative authz directives cannot contribute meaningfully to OR-like blocks, as they can only supply neutral (AUTHZ_NEUTRAL) or false (AUTHZ_DENIED) values, they are simply not allowed in these containers. (The code should support them, though, if this check is ever removed.) Similarly, AND-like blocks with only negative authz directives also produce a configuration-time error. The MergeAuthzRules directive is renamed SatisfySections and takes three possible values: Off, All, and Any. The default is Off, meaning that as directory configuration sections are merged, new authz configurations replace previously merged ones. However, a directory section may specify SatisfySections All to force its predecessor's authz to be successful as well as its own. The SatisfySections Any option permits either the predecessor's or the current section's authz to grant the user access. Note that the setting of SatisfySections continues to be local only to the directory section it appears in; it is not inherited to subsequent
Re: AuthzMergeRules blocks everything in default configuration
Roy T. Fielding wrote: IIRC, trunk contains (or contained) a security problem with regard to backward compatibility with 2.x configs. I won't consider it releasable until that has been fixed one way or another, and I can't tell from this mail thread whether the actual fix was committed or not. This posting might answer some of your questions: http://marc.info/?l=apache-httpd-dev&m=122573959206980&w=2 Yes, the fix was committed, and the current intention is that 2.2 configurations should be useable as-is with trunk, without changes. I don't think I can *promise* that's working as intended, but that's the idea; if you encounter bugs, please report them! AuthzMergeRules Off still appears in docs/conf/httpd.conf:162-167 even though it doesn't appear to be a valid config directive either. This should be gone in httpd.conf.in; there's no httpd.conf in SVN. Is there any chance you need to refresh from SVN trunk? The docs seem to indicate this is now MergeAuthz Off and is off by default? Is that true in the code? Yes, that's the intention; again, please report bugs or backwards-compatibility problems. (why those directive names are Match* instead of AuthMatch* boggles my mind). What is the conclusion to this thread? Why are all the Authz directives given random names? Am I the only one that finds this feature set impossible to follow? Please see the final set of comments in the posting I linked to above, and the patch in that posting as well. Just to reiterate, what I wrote there was: Finally there was a certain amount of bike-shed re-painting in the form of renaming configuration directives. I settled on Match, MatchAll, etc. based on Eric Covener's comments. If people dislike those names, please jump in and change them.
Or if most folks think we'd be better off without authz containers entirely, please vote for the following patch, which simply turns all that stuff off, leaving (I hope) a fairly clean core authz implementation that supports default 2.2-style behaviour and is extensible down the road, should that be desired. Chris. -- GPG Key ID: 366A375B GPG Key Fingerprint: 485E 5041 17E1 E2BB C263 E4DE C8E3 FA36 366A 375B
Re: AuthzMergeRules blocks everything in default configuration
On Mon, Dec 1, 2008 at 6:16 PM, Chris Darroch [EMAIL PROTECTED] wrote: Finally there was a certain amount of bike-shed re-painting in the form of renaming configuration directives. I settled on Match, MatchAll, etc. based on Eric Covener's comments.

I had meant that if containers are used, I'd like their names to communicate the require-or-reject part, while the authz providers would be match-like (because the Require on the inside is confusing when surrounded by all the variations). I do agree that all the way down, the authz/aa nature of the directives/containers needs to remain clear. I haven't been able to get past the negated containers to revisit how this all works in trunk. -- Eric Covener [EMAIL PROTECTED]
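[Editor's note: to make Eric's naming concern concrete, here is a hypothetical rendering of the idea he describes — the container name carries the accept/reject decision while the inner lines are pure matchers. Neither <RejectAll> nor the Match provider syntax below is actual trunk syntax; this is an illustration of the proposal, not working configuration.]

```apache
# Hypothetical sketch of Eric's suggestion (not real directives):
# the container says what happens on a match, the inner lines
# only describe what to match.
<RejectAll>
    Match ip 192.168.1.0/24
    Match env blocked
</RejectAll>
```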
Re: AuthzMergeRules blocks everything in default configuration
On Dec 1, 2008, at 3:16 PM, Chris Darroch wrote: Roy T. Fielding wrote: IIRC, trunk contains (or contained) a security problem with regard to backward compatibility with 2.x configs. I won't consider it releasable until that has been fixed one way or another, and I can't tell from this mail thread whether the actual fix was committed or not.

This posting might answer some of your questions: http://marc.info/?l=apache-httpd-dev&m=122573959206980&w=2

Yes, the fix was committed, and the current intention is that 2.2 configurations should be usable as-is with trunk, without changes. I don't think I can *promise* that's working as intended, but that's the idea; if you encounter bugs, please report them!

AuthzMergeRules Off still appears in docs/conf/httpd.conf:162-167 even though it doesn't appear to be a valid config directive either.

This should be gone in httpd.conf.in; there's no httpd.conf in SVN. Is there any chance you need to refresh from SVN trunk?

Urgh, no, just a leftover from an old in-tree test. Never mind.

The docs seem to indicate this is now MergeAuthz Off and is off by default? Is that true in the code?

Yes, that's the intention; again, please report bugs or backwards-compatibility problems.

(why those directive names are Match* instead of AuthMatch* boggles my mind). What is the conclusion to this thread? Why are all the Authz directives given random names? Am I the only one that finds this feature set impossible to follow?

Please see the final set of comments in the posting I linked to above, and the patch in that posting as well. Just to reiterate, what I wrote there was:

Finally there was a certain amount of bike-shed re-painting in the form of renaming configuration directives. I settled on Match, MatchAll, etc. based on Eric Covener's comments. If people dislike those names, please jump in and change them.
Or if most folks think we'd be better off without authz containers entirely, please vote for the following patch, which simply turns all that stuff off, leaving (I hope) a fairly clean core authz implementation that supports default 2.2-style behaviour and is extensible down the road, should that be desired.

But we are already using *Match all over the place to indicate the use of regex matching. :( I'll think about changing the names. Thanks for committing the important fix. Roy
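[Editor's note: for readers following the thread, here is a hedged sketch of what is at stake in configuration terms. The first block is plain 2.2-style authz, which the thread says is intended to keep working unchanged on trunk. The container names in the second block are exactly the bike-shed item under discussion, so treat <MatchAll> as provisional per Chris's renaming, not settled syntax; the directory paths are illustrative.]

```apache
# 2.2-style authz, intended to work as-is on trunk:
<Directory "/var/www/private">
    AuthType Basic
    AuthName "Private"
    AuthUserFile conf/passwd
    Require valid-user
</Directory>

# Container form under discussion (names provisional; Chris's
# patch used Match, MatchAll, etc. -- subject to change):
<Directory "/var/www/private2">
    <MatchAll>
        Require valid-user
        Require ip 10.0.0.0/8
    </MatchAll>
</Directory>
```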
Re: new modules in trunk
Paul Querna wrote: ... If the core fully supported UDP and multicast listeners, it should be possible to write the heartmonitor module as just a protocol module, although heartbeat would still likely need its current structure. ...

Issac Goldstand already developed UDP support for the contributed mod_dns (both financed by my company, and contributed to the ASF). I believe that it's finally time to add these things (together with mod_ftp) officially to the trunk, at least as experimental. Their place is there, and as low-level modules, it's much more natural to include them than some of the mentioned modules. The current server, which was an HTTP daemon in the past, already supports various protocols (HTTPS, FTP in the proxy, etc.), so FTP and DNS will be great additions, especially since it's so hard to support them as external modules, unlike some of the mentioned modules. Adding the SMTP module (as experimental, because it still needs many fixes) may close the circle and make Apache an all-in-one server, where many features (such as bucket brigades, configuration, pools, MPMs, etc.) are reused across all of the protocols, making Apache a very elegant and smart server. -- Eli Marmor [EMAIL PROTECTED] CEO, Netmask (El-Mar) Internet Technologies Ltd. __ Tel.: +972-9-766-1020 8 Yad-Harutzim St. Fax.: +972-9-766-1314 P.O.B. 7004 Mobile: +972-50-5237338 Kfar-Saba 44641, Israel
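[Editor's note: for readers unfamiliar with the mechanism this thread is discussing, here is a minimal Python sketch of the kind of multicast heartbeat mod_heartbeat emits and mod_heartmonitor collects. The group address, port, and the v=1&ready=&busy= payload shape are illustrative assumptions based on the modules' descriptions, not the confirmed wire format; check the module sources before relying on it.]

```python
import socket

# Illustrative group/port; in mod_heartbeat the multicast address
# is chosen by the administrator, so these are not defaults.
GROUP, PORT = "239.0.0.1", 27999

def make_heartbeat(ready, busy):
    """Build a heartbeat payload carrying the server's ready/busy
    worker counts. The query-string-style shape is an assumption."""
    return ("v=1&ready=%d&busy=%d" % (ready, busy)).encode("ascii")

def parse_heartbeat(data):
    """Parse a heartbeat payload back into (ready, busy) counts,
    as a monitor such as mod_heartmonitor would need to do."""
    fields = dict(kv.split("=", 1) for kv in data.decode("ascii").split("&"))
    return int(fields["ready"]), int(fields["busy"])

def send_heartbeat(ready, busy):
    """Send one heartbeat datagram to the multicast group."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # Keep the datagram on the local network segment.
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    s.sendto(make_heartbeat(ready, busy), (GROUP, PORT))
    s.close()
```

A load-balancer method like mod_lbmethod_heartbeat would then prefer workers whose most recent heartbeat reports the most free (ready) capacity.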