Re: CLOSE_WAIT problem

2012-05-25 Thread Andrew Oliver
You need to specify a pool size.  Also ask on the users list.
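
For example, a minimal sketch of the relevant ProxyPass worker parameters (the
path, backend host, and values here are illustrative only):

    # Cap the per-worker backend connection pool and age idle connections out
    ProxyPass /app http://backend.example.internal/app max=20 smax=5 ttl=60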

Thanks,

Andrew C. Oliver

On Fri, May 25, 2012 at 5:21 AM, Bongjae Chang  wrote:
> Hi Reindl,
>
> Thank you for the link.
>
> When I read the link, it also said "Faulty scenarios would be like
> filedescriptor leak, server not being
> execute close() on socket leading to pile up of close_wait sockets".
>
> I raised the very same issue (mod_proxy's backend connection which has
> already been closed).
>
> Thanks!
>
> Regards,
> Bongjae Chang
>
>
>
>
> On 5/25/12 6:01 PM, "Reindl Harald"  wrote:
>
>>
>>
>>On 25.05.2012 10:52, Bongjae Chang wrote:
> Is the watcher thread which is going through all of the connections
> looking to see if they have been closed by the peer the only solution?

 There is no thread.
>>>
>>> I see... then I think it would be useful if mod_proxy supported the
>>> feature later (just my opinion).
>>
>>as far as i understand this is normal TCP behavior for reusing sockets
>>
>>http://blogs.technet.com/b/janelewis/archive/2010/03/09/explaining-close-wait.aspx
>>
>
>


Re: mod_proxy_html

2011-10-14 Thread Andrew Oliver
On Fri, Oct 14, 2011 at 7:05 AM, Greg Ames  wrote:
>
>
> On Fri, Oct 14, 2011 at 4:44 AM, Nick Kew  wrote:
>>
>> If we include it in 2.4 then we're making libxml2 a dependency.
>> That's of concern primarily to packagers,
>
> I doubt that I could find libxml2 for z/OS.  I vote for keeping it a
> separate module.

http://xmlsoft.org/news.html

"
build fixes: Cygwin portability fixes (Gerrit P. Haase), calling
convention problems on Windows (Marcus Boerger), cleanups based on
Linus' sparse tool, update of win32/configure.js (Rob Richards),
remove warnings on Windows(Marcus Boerger), compilation without SAX1,
detection of the Python binary, use $GCC inestad of $CC = 'gcc'
(Andrew W. Nosenko), compilation/link with threads and old gcc,
compile problem by C370 on Z/OS,
"

Judging by a Google search, it appears it has been compiled on z/OS before.


>
>>
>> but it's a decision
>> we should at least take consciously.  That's why my initial
>> proposal made it a separate module!
>
> Greg
>


Re: mod_proxy_html

2011-10-13 Thread Andrew Oliver
nonbinding +1 for it being bundled.  That makes life a tad easier in
various environments where I have to configure mod_proxy...
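
For instance, the kind of reverse-proxy setup where bundling would help looks
roughly like this (a sketch only; the backend host and mapping are
illustrative, and directive availability depends on the mod_proxy_html
version):

    ProxyPass /app http://backend.example.internal/
    ProxyPassReverse /app http://backend.example.internal/
    ProxyHTMLEnable On
    ProxyHTMLURLMap http://backend.example.internal /app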

On Thu, Oct 13, 2011 at 1:54 AM, Rainer Jung  wrote:
> On 12.10.2011 23:56, Nick Kew wrote:
>>
>> On 10 Oct 2011, at 23:02, Nick Kew wrote:
>>
>>> Any interest?
>>
>> Looks like a lazy consensus in favour!
>
> If you want it a bit less lazy: +1 from me also.
>
>> Regarding IP, it's mine to sign over, so that's straightforward.
>> So I guess it's just a matter of going through the administrivia:
>>  - create the subproject in svn
>>  - relicense
>>  - produce documentation in our format
>>  - write a README
>>  - what have I missed that needs to happen up-front?
>>
>> I'm laptop-bound now and for another week, so the hurdle to action
>> is higher than it would normally be.  So I'll leave that time for any
>> objections before attempting to do anything.  In practice that means
>> target date for action is the weekend of Oct 22/23.
>
> I had the impression Jim was +1 for bundling in 2.4, and you are
> planning it as a separate module like mod_ftp or mod_fcgid.
>
> I'm neutral on whether to bundle or not. We would need to handle the
> dependencies like for some other modules when bundling, i.e. disabling the
> build if the deps are not found.
>
> Regards,
>
> Rainer
>
>


Re: [PATCH 51489] ProxyPassReverse issue + patch

2011-07-08 Thread Andrew Oliver
This seems more like a job for mod_rewrite to me.
On Jul 8, 2011 12:32 PM, "Micha Lenk"  wrote:
> Hi Apache developers,
>
> I'm using Apache as a reverse proxy in a simple load balancer setup.
> I use ProxyPassReverse in order to re-write the backend server name in
> HTTP redirections (ie. in the Location header of the HTTP response).
> My configuration for the virtual host essentially looks like this:
>
> <Proxy balancer://196f045aca6adc82a0b6eea93ed286a1>
> BalancerMember http://server-1.local status=-SE
> BalancerMember http://server-2.local status=-SE
> </Proxy>
> <VirtualHost *:80>
> ServerName frontend.local
>
> <Location />
> ProxyPass balancer://196f045aca6adc82a0b6eea93ed286a1/
> ProxyPassReverse balancer://196f045aca6adc82a0b6eea93ed286a1/
> </Location>
> </VirtualHost>
>
> Now, I was wondering why redirects get an additional slash between the
> server name and the path. Example:
>
> 1. The backend server redirects the URL http://server-1.local/foo to
> the URL http://server-1.local/foo/ because this is actually a
> directory (so far no issue)
>
> 2. With the configuration above the reverse proxy redirects the URL
> http://frontend.local/foo to http://frontend.local//foo/
>
> What bothers me is the additional slash before '/foo/', so I dug
> into the source code and found the following lines in
> modules/proxy/proxy_util.c:
>
> PROXY_DECLARE(const char *) ap_proxy_location_reverse_map(request_rec *r,
>                                       proxy_dir_conf *conf, const char *url)
> {
> [...]
>     l1 = strlen(url);
> [...]
>     for (i = 0; i < conf->raliases->nelts; i++) {
>         if (ap_proxy_valid_balancer_name((char *)real, 0) &&
>             (balancer = ap_proxy_get_balancer(r->pool, sconf, real))) {
>             int n, l3 = 0;
>             proxy_worker **worker = (proxy_worker **)balancer->workers->elts;
>             const char *urlpart = ap_strchr_c(real, '/');
>             if (urlpart) {
>                 if (!urlpart[1])
>                     urlpart = NULL;
>                 else
>                     l3 = strlen(urlpart);
>             }
>             for (n = 0; n < balancer->workers->nelts; n++) {
>                 l2 = strlen((*worker)->s->name);
>                 if (urlpart) {
>                     /* urlpart (l3) assuredly starts with its own '/' */
>                     if ((*worker)->s->name[l2 - 1] == '/')
>                         --l2;
>                     if (l1 >= l2 + l3
>                         && strncasecmp((*worker)->s->name, url, l2) == 0
>                         && strncmp(urlpart, url + l2, l3) == 0) {
>                         u = apr_pstrcat(r->pool, ent[i].fake, &url[l2 + l3],
>                                         NULL);
>                         return ap_construct_url(r->pool, u, r);
>                     }
>                 }
>                 else if (l1 >= l2
>                          && strncasecmp((*worker)->s->name, url, l2) == 0) {
>                     u = apr_pstrcat(r->pool, ent[i].fake, &url[l2], NULL);
>                     return ap_construct_url(r->pool, u, r);
>                 }
>                 worker++;
>             }
> [...]
>
> Right now I don't really understand the reason for the special casing of
> urlpart == "/" in modules/proxy/proxy_util.c, lines 1126 to 1129 (SVN
> rev. 1144374). If urlpart == "/", then the code in lines 1151 and 1152
> gets executed, which seems to add the slash.
>
> I tried to remove the special casing (see submitted patch), and
> apparently the removal fixes the issue.
>
> Does anybody know the reason for the special casing mentioned above?
> If not, I'd like to suggest committing my patch.
>
> Regards,
> Micha


Re: id=51247 Enhance mod_proxy and _balancer with worker status flag to only accept sticky session routes

2011-05-24 Thread Andrew Oliver
Let me know if you need help testing.  I likely can round up some
volunteers as this is something frequently asked about for my clients.

-Andy

On Tue, May 24, 2011 at 10:37 AM, Keith Mashinter  wrote:
> Thanks, understood, I'll attach a patch to Bugzilla for the trunk as well in
> the next day or two.
>
> \|/- Keith Mashinter
> kmash...@yahoo.com
> 
> From: Jim Jagielski 
> To: dev@httpd.apache.org; Keith Mashinter 
> Sent: Tuesday, May 24, 2011 10:18:04 AM
> Subject: Re: id=51247 Enhance mod_proxy and _balancer with worker status
> flag to only accept sticky session routes
>
> I like the concept... will review.
>
> PS: Most patches should be against trunk. We fold into trunk,
>     test and only then propose for backport for 2.2.x
>
>
> On May 23, 2011, at 3:10 PM, Keith Mashinter wrote:
>
>> I've added a patch to the proxy/balancer to allow for route-only workers
>> that are only enabled for sticky session routes, allowing for an even more
>> graceful fade-out of a server than making its lbfactor=1 compared to
>> lbfactor=100 for others.
>>
>> Please reply/vote if you also think it's useful.
>>
>> https://issues.apache.org/bugzilla/show_bug.cgi?id=51247
>> This enhancement, actually an SVN patch against 2.2.19, provides a worker status
>> flag to set a proxy worker as only accepting requests with sticky session
>> routes, e.g. only accept requests with a .route such as Cookie
>> JSESSIONID=xxx.tc2.
>>
>> This allows for a graceful fade-out of servers when their sessions are
>> removed;
>> they continue to receive requests for their sticky session routes but are
>> passed over for requests with no specified route, just as if they were
>> disabled.  In other words, route-only workers are only enabled for sticky
>> session routes.
>>
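A rough configuration sketch of the proposal (the +R flag is the one proposed
in this patch, not a stock httpd flag; hostnames, routes, and paths are
illustrative):

    <Proxy balancer://mycluster>
        BalancerMember http://tc1.example.internal:8080 route=tc1
        # Fading out: this member only serves requests carrying route tc2
        BalancerMember http://tc2.example.internal:8080 route=tc2 status=+R
    </Proxy>
    ProxyPass /app balancer://mycluster/app stickysession=JSESSIONID
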
>> Intended use (Tomcat JSESSIONID noted here but could be PHPSESSIONID, Ruby
>> _session_id, or anything with cookie or request-parameter based session
>> ids):
>> 1. An Apache rev-proxy running for multiple Tomcats.
>> 2. To fade out a Tomcat for maintenance, set route-only enabled in the
>> balancer-manager or reload the configuration with the worker status +R.
>> (This depends on the Tomcat web apps deleting session cookies; see further
>> below.)
>> 3. Check the balancer-manager or its Tomcat worker every few minutes /
>> hours, and when it seems to have completed its old sessions you can mark it
>> fully disabled.
>> 4. Once maintenance is done, you can then set route-only disabled (status -R)
>> and fully enable the worker again.
>>
>> To delete a JSESSIONID Cookie from a Servlet, you need to specify the same
>> Domain and Path as the original Cookie and setMaxAge(0), as in the typical
>> example below, but you should check your own Domain and Path when a Cookie
>> is created, e.g. watch the Cookie headers in Firefox Firebug.
>>
>>    // To delete a Cookie, call setMaxAge(0) and also set any original
>>    // domain and path if specified.
>>    Cookie ck = new Cookie("JSESSIONID", null);
>>
>>    //ck.setDomain("");
>>    ck.setPath(request.getContextPath());
>>    ck.setMaxAge(0);
>>    response.addCookie(ck);
>>
>> \|/- Keith Mashinter
>> kmash...@yahoo.com
>
>
>
>


Re: [PATCH] Add TLS-SRP (RFC 5054) support to mod_ssl

2011-04-17 Thread Andrew Oliver
This is excellent news!
On Apr 17, 2011 5:48 PM, "Quinn Slack"  wrote:
> Posted at: https://issues.apache.org/bugzilla/show_bug.cgi?id=51075
>
> TLS-SRP (RFC 5054)[1] is an implementation of the Secure Remote Password
> (SRP)[2] protocol as a key exchange method for TLS. It uses a shared secret
> derived from a user's password to supplement or replace third-party
> certificates in authenticating a TLS connection.
>
> This patch adds TLS-SRP support to mod_ssl, adds two new directives
> (SSLSRPVerifierFile and SSLSRPUnknownUserSeed), adds two new SSL env vars
> (SSL_SRP_USER and SSL_SRP_USERINFO), and includes basic documentation.
>
> The TLS-SRP-specific code uses preprocessor guards on OPENSSL_NO_SRP and is
> enabled only if OpenSSL >= 1.0.1, which is the first version of OpenSSL that
> will include SRP support[3].
>
> To use this patch:
> (1) install OpenSSL 1.0.1;
> (2) create an OpenSSL SRP verifier (passwd) file with `openssl srp -srpvfile
> passwd.srpv -add username`;
> (3) specify this file in the server config with: SSLSRPVerifierFile
> /path/to/passwd.srpv
> (4) optionally, for easier testing, force the use of SRP: SSLCipherSuite
> "!DSS:!aRSA:SRP"
>
> To test the TLS-SRP functionality, use gnutls-cli or a version of cURL with
> TLS-SRP support:
>
> gnutls-cli --srpusername user --srppasswd secret host
> curl --tlsuser user --tlspassword secret -k https://host
>
> TLS-SRP support for Apache is already provided by mod_gnutls[4]. Now that PAKE
> patents have expired and the security of CAs is increasingly being doubted,
> TLS-SRP is gaining wider acceptance. GnuTLS, mod_gnutls, and TLSLite have
> supported it for years; cURL since February; OpenSSL will support it in the
> next release; and I have also assembled patches[5] for Chrome, Firefox, and
> NSS.
>
> This patch was originally created by Christophe Renou and Peter Sylvester of
> EdelWeb. I updated it to work with Apache 2's mod_ssl.
>
> Bugzilla entry: https://issues.apache.org/bugzilla/show_bug.cgi?id=51075
> Patch: https://issues.apache.org/bugzilla/attachment.cgi?id=26892
>
>
> [1] http://tools.ietf.org/html/rfc5054
> [2] http://srp.stanford.edu/
> [3] http://cvs.openssl.org/chngview?cn=20484
> [4] http://trustedhttp.org/wiki/TLS-SRP_in_Apache_mod_gnutls
> [5] http://trustedhttp.org/


Re: Theory on recent Phoronix benchmark?

2011-04-05 Thread Andrew Oliver
That's an interesting point.  The reason this piqued my interest is that it
isn't really in line with my last JVM benchmarking (granted, some time ago).
Performance degradation was a factor, in particular when I didn't increase
the heap size a little, but I've not seen this level of degradation.
Having enjoyed the pleasure of 16-32bit thunking when I "got" to write an
OS/2 device driver a good bit back, I rather like your theory for that.  I
wrote the Phoronix dude(ette/s) and if (he/she/it/they) don't reply I'll
take a crack at it on 11.04.

Thanks,

Andy

On Tue, Apr 5, 2011 at 5:58 PM, William A. Rowe Jr. wrote:

> On 4/5/2011 3:52 PM, Stefan Fritsch wrote:
> > On Tuesday 05 April 2011, Andrew Oliver wrote:
> >> That is just the thing.  Other things that should have been
> >> similarly affected in the benchmark were not.  Take a gander if
> >> you would at some of the rest of that article...
> >
> > HTTPD uses lots of pointers when handling per-dir and per-module
> > configuration data. I agree with Bill that the 2x size increase in
> > pointers is likely a major performance factor. Maybe the other
> > workloads don't use so many pointers. They don't have a java benchmark
> > AFAICS, which should be similarly affected.
> >
> > Or it is just bad luck that with 32bit, HTTPD's working set just fits
> > into some cache while with 64bit, it doesn't. It would be interesting
> > to see the same comparison with 2.3.11. There were some optimizations
> > which should reduce CPU cache usage.
>
> I'm actually not entirely clear if they were using 64 bit executables
> throughout all of their tests for x86_64, in fact I suspect they weren't.
>
> If it is a 32 bit binary (and CC="gcc -m64" ./configure  might be needed
> here depending on gcc defaults), there is a painful threshold of thunking
> 32 bit calls into the 64 bit kernel.
>
> But one interesting thing about their 'stellar' performance stats on the
> x86_64 is that most apps are powered by assembler and very specific word
> size manipulations, e.g. the sound waveform or image bitmap memory
> footprint
> doesn't change, and openssl gets to employ SSE2 (post i686) manipulations.
>
> Finally, I'd expect no advantage from system caching for httpd moving
> from 2GB to 24GB of ram, which
>
>


Re: Theory on recent Phoronix benchmark?

2011-04-05 Thread Andrew Oliver
That is just the thing.  Other things that should have been similarly
affected in the benchmark were not.  Take a gander if you would at some of
the rest of that article...
On Apr 5, 2011 11:32 AM, "William A. Rowe Jr."  wrote:
> On 4/5/2011 6:27 AM, Andrew Oliver wrote:
>>
>> Anyone have any theory on why 64-bit was so much worse (suggest looking at
>> the general article for context rather than solely the excerpt above)?
>
> Simple memory access. Intel doesn't scale to 64 bits as cleanly as, say,
> a sparcv9 64 bit binary vs sparcv8 32 bit.
>
> int's, pointers, most resources consume 2x heap and stack, except of course
> strings.
>
> All this means you are falling out of L1, L2 cache out to memory pretty
> regularly. Pick some other applications, you should find similar results
> on most any intel program, including 32 vs 64 bit jvm performance.


Theory on recent Phoronix benchmark?

2011-04-05 Thread Andrew Oliver
http://www.phoronix.com/scan.php?page=article&item=ubuntu_natty_pae64&num=3

"When it comes to running the Apache web-server in these different
configurations, there is a 3% improvement when moving from the i686 to i686
PAE kernel and 2% on top of that when moving to the x86_64 Ubuntu. With the
newer Core i7 Sandy Bridge notebook there is a 6% boost in performance with
the PAE kernel, but the 64-bit performance strangely suffers a setback. In
this test and the rest, they are built from source for the respective
architecture."

Anyone have any theory on why 64-bit was so much worse (suggest looking at
the general article for context rather than solely the excerpt above)?

-Andy


Re: Bug 50807 - mod_proxy issue with half-closed connections

2011-02-23 Thread Andrew Oliver
...But with that kind of traffic, I kinda wonder if you shouldn't
segment if possible.  I.e. run PHP or whatever with Prefork on one set
of servers and run mod_proxy+worker on the other...
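
For reference, the connection-reuse knobs on the proxy worker are the usual
levers here (the hostname, path, and values are illustrative; this is not
claimed to fix the underlying bug):

    # Avoid pooling backend connections at all...
    ProxyPass /app http://backend.example.internal/app disablereuse=On
    # ...or keep reuse but let pooled connections age out quickly
    # ProxyPass /app http://backend.example.internal/app ttl=30 smax=2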

On Wed, Feb 23, 2011 at 10:56 AM, "Plüm, Rüdiger, VF-Group"
 wrote:
>
>
>> -Original Message-
>> From: Jim Jagielski [mailto:j...@jagunet.com]
>> Sent: Wednesday, 23 February 2011 16:50
>> To: dev@httpd.apache.org
>> Subject: Re: Bug 50807 - mod_proxy issue with half-closed connections
>>
>>
>> On Feb 23, 2011, at 10:35 AM, Eric Covener wrote:
>>
>> > On Wed, Feb 23, 2011 at 10:12 AM, Gregory Boyce
>>  wrote:
>
>> > The manual could certainly do a better job of describing how the
>> > connection pool is used, with respect to frontend
>> connections (is this
>> > a 2.0 thing only?), child processes, exactly when smax/ttl
>> is checked,
>> > etc.
>> >
>> > Surprising that you managed to burn through all your local ports but
>> > still not managed to trigger that backend connection closure being
>> > noticed -- maybe would make sense with prefork if the pools were
>> > per-process?
>> >
>> > You could also set MaxRequestsPerChild 100k for relief if this is
>> > still a problem.
>> >
>>
>> couldn't one also use lower level tcp stack tuning to
>> address this?
>>
>
> Not sure. The problem is that we (httpd) do not call a close on the socket
> descriptor. We only do that once we want to reuse the connection and notice
> that it has been closed by the remote side.
> So I am not sure if there is any TCP parameter that times out TCP connections
> in half open state and closes them on behalf of the application.
>
>
> Regards
>
> Rüdiger
>


Re: Loadable Module debugging Apache 2.2.17 Windows.

2011-02-08 Thread Andrew Oliver
Another possibility is that one of your libraries can't be accessed by
LocalSystem (which services run as) but can be accessed as whatever user you're
logging in as.  You could try changing the service descriptor to run
Apache as that user.

On Tue, Feb 8, 2011 at 3:36 PM, Zeno Davatz  wrote:
> Dear William
>
> As usual, thank you for your valuable reply.
>
> On Tue, Feb 8, 2011 at 9:16 PM, William A. Rowe Jr.  
> wrote:
>> On 2/8/2011 8:30 AM, Zeno Davatz wrote:

 Can you start it "manually" from the console?

 http://httpd.apache.org/docs/current/invoking.html
 http://httpd.apache.org/docs/current/platform/windows.html#wincons

> mod_ruby.so could not be loaded. If I open mod_ruby.so with

 Usually it will tell you a reason why something cannot be loaded.
>>>
>>> This is my debug output of
>>>
>>> httpd -e debug
>>>
>>> https://gist.github.com/816510
>>>
>>> ruby_module seems to be wonderfully loaded when starting from the console.
>>
>> This means the ruby module or functions within msys have an intrinsic design
>> flaw that makes them incompatible with running under the context of a
>> 'service'.
>
> Hmm, interesting. So apart from that this means that Apache can
> actually load the module without a problem.
>
>> As a service, there are no stdin/out/err file handles opened, they are simply
>> NULL and may cause programs to misbehave in interesting ways.  I can't tell
>> you what specifically mod_ruby has done wrong in this respect, because I'm
>> simply too busy to review the code, but anyone familiar with windows services
>> and the C language should be able to decipher this with little hassle.
>
> Thank you again!
>
> I am willing to pay anybody who can provide me with some help here. If
> you know somebody, please let me know, or tell me if there is somebody I can
> contact. Otherwise I will try to proceed a bit myself. Seems doable.
>
> Thank you for your time.
>
> Best
> Zeno
>


Re: balancer worker status

2011-02-01 Thread Andrew Oliver
IMO look at mod_jk for this.  Its interface was actually kind of
nice.  (Not a fan otherwise; just this aspect was nice.)

On Tue, Feb 1, 2011 at 1:32 PM, William A. Rowe Jr.  wrote:
> On 2/1/2011 11:03 AM, Jim Jagielski wrote:
>> Anyone have any good ideas on the best way, GUI-wise, on how
>> to set/reset the various worker statuses on the balancer-manager
>> page? Right now we just Enable|Disable, but obviously we need
>> more fine-grained control than that. Radio buttons? Checkboxes?
>>
>> (same with actually displaying the status as well... maybe
>> use the actual ProxySet status flags, eg: E, H, etc...)?
>
> For multi-choice, I'd go droplist with the current status defaulted.
>


Re: errors that cause proxy to move worker to error state

2010-08-31 Thread Andrew Oliver
A 400 doesn't indicate a problem with the worker or the system behind
the proxy; it indicates a problem with the client.  Also, marking a
worker bad for, say, a 401 Unauthorized would make all authentication
impossible, and a 403 or 404 would drain the pool every time someone asked
for something they weren't allowed to have or that wasn't located
where they thought it was (especially spiders like Google).
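
For admins who do want specific backend statuses to trip the error state,
newer mod_proxy exposes a balancer parameter along these lines (a sketch
only; availability depends on the httpd version, and the status list is
illustrative):

    ProxyPass /app balancer://mycluster/app failonstatus=502,503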

-Andy

On Tue, Aug 31, 2010 at 9:26 AM, Jeff Trawick  wrote:
> It looks like this is just 500 and 503.
>
> Why not 400, for example?
>
> if (access_status == OK)
>     break;
> else if (access_status == HTTP_INTERNAL_SERVER_ERROR) {
>     /* Unrecoverable server error.
>      * We can not failover to another worker.
>      * Mark the worker as unusable if member of load balancer
>      */
>     if (balancer) {
>         worker->s->status |= PROXY_WORKER_IN_ERROR;
>         worker->s->error_time = apr_time_now();
>     }
>     break;
> }
> else if (access_status == HTTP_SERVICE_UNAVAILABLE) {
>     /* Recoverable server error.
>      * We can failover to another worker
>      * Mark the worker as unusable if member of load balancer
>      */
>     if (balancer) {
>         worker->s->status |= PROXY_WORKER_IN_ERROR;
>         worker->s->error_time = apr_time_now();
>     }
> }
> else {
>     /* Unrecoverable error.
>      * Return the origin status code to the client.
>      */
>     break;
> }
>
>


Re: Tracking discussion for Patch 49717 - adds SSLTimeout

2010-08-18 Thread Andrew Oliver
Sorry, I had it on my list of todos to follow up on this.  On testing,
mod_reqtimeout accomplishes the same thing, so this can be withdrawn.
Once I'm not connected through my cell phone tether or a hotel network,
I'll change the ticket.  It should probably be maintained in patch
form for those who want this on 2.0.x on AIX (for example ;-) ).
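
For reference, the mod_reqtimeout equivalent looks roughly like this (the
values are illustrative):

    # Limit the time for the SSL handshake plus request headers, and the body
    RequestReadTimeout header=10-20,MinRate=500 body=30,MinRate=500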

-Andy

On Wed, Aug 18, 2010 at 4:49 PM, Stefan Fritsch  wrote:
> On Friday 06 August 2010, Jeff Trawick wrote:
>> On Fri, Aug 6, 2010 at 12:26 PM, Andrew Oliver 
> wrote:
>> > This patch
>> > https://issues.apache.org/bugzilla/show_bug.cgi?id=49717 adds an
>> > SSLTimeout directive which is a timeout on the initial ssl
>> > handshake separate from the general Timeout.  See the issue
>> > description for a bit more detail.
>> >
>> > Why should this not be added?
>> > and/or Why should this be added?
>> > How should it be improved?
>>
>> I wonder if there is any way for mod_reqtimeout to handle this
>> capability??
>
> From the PR: "This allows for configurations where more client latency
> is allowed for established ssl connections but not for initial session
> setup."
>
> This seems to target the same problem as mod_reqtimeout's header
> timeout, which limits the total time for receiving all request headers
> (including the initial SSL handshake). Andrew, would that fit your
> needs? If no, why not?
>
> Cheers,
> Stefan
>
> PS: 2.2.x's mod_reqtimeout is buggy, use the version from trunk or
> wait for 2.2.17.
>


Re: trunk "ping" for http proxy

2010-08-16 Thread Andrew Oliver
Note that it is an option, not a default setting.  The problem with
the heartbeat bit which Red Hat/JBoss use is the unstandardized
proprietary protocol required (http://jboss.org/mod_cluster) with
separate logic to manage it.

The problem with the status URL is that it doesn't accomplish the same
thing, nor is it very manageable with multiple contexts (e.g.
http://server1:8080/foo, http://server1:8081/bar, ...).  You'd have to
make a separate request each time.

Purpose:

C = Browser
P = middle apache server
W = destination web server (probably tomcat)

C-->P--->W

C asks for http://myserver/someDatabaseBackedThing
P matches and forwards this request to W with the expect continue and
specified 5 sec expect-continue timeout.  A separate overall proxy
timeout of 1 minute is set.
W returns a continue but then begins to process the request
(CGI/JavaServlet/PHP whatever to the database)
W spends less than a minute and the request is successful

C asks for http://myserver/someDatabaseBackedThing
P matches and forwards this request to W with the expect continue and
specified 5 sec expect-continue timeout.  A separate overall proxy
timeout of 1 minute is set.
W does not return a continue or any kind of response for 5 seconds
P marks that worker dead meat

C asks for http://myserver/someDatabaseBackedThing
P matches and forwards this request to W with the expect continue and
specified 5 sec expect-continue timeout.  A separate overall proxy
timeout of 1 minute is set.
W returns a continue but then begins to process the request
(CGI/JavaServlet/PHP whatever to the database)
W spends more than a minute and the proxy request times out

In the case of a dead connector, things back out rather nicely, and it's
clearly distinguishable from a bad script.
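
Configuration-wise, the scenario above maps onto the proxy worker parameters
roughly as follows (a sketch only; host, port, and path are illustrative):

    # 5 second 100-continue "ping" window, 1 minute overall proxy timeout
    ProxyPass /someDatabaseBackedThing http://W.example.internal:8080/someDatabaseBackedThing ping=5 timeout=60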

-Andy

On Mon, Aug 16, 2010 at 2:52 PM, Jim Jagielski  wrote:
>
> On Aug 16, 2010, at 1:42 PM, Paul Querna wrote:
>
>> On Mon, Aug 16, 2010 at 8:30 AM, Jim Jagielski  wrote:
>>>
>>> On Aug 16, 2010, at 10:56 AM, Plüm, Rüdiger, VF-Group wrote:

 This basically sums up the downsides of this approach I see as well.

 IMHO to avoid a spec violation we can only add the Expect header to
 requests with request bodies. OTOH these requests hurt most when they
 fail as we cannot send the request body for a second time, because
 we do not have it any longer available in most situations and requests
 with request bodies are usually not idempotent.

 On the second issue we only need to take care that we do not add something
 already there and remove something that client expects to see.

 One last thing I see is that this only works if the backend is HTTP/1.1:

 8.2.3

 - If the proxy knows that the version of the next-hop server is HTTP/1.0
   or lower, it MUST NOT forward the request and it MUST respond with a 417
   (Expectation Failed) status.
>>>
>>> Yes, the code itself is aware of the limitations of 100-Continue,
>>> including version and req body considerations... It's not ideal,
>>> but it's better than the OPTIONS method which I played around
>>> with earlier...
>>>
>>> Still I think it's useful to add it in, have it disabled by
>>> default, and see how far we can take it...
>>
>> I think the only options really are a status URL, with a regex match,
>> so you can test if the backend has an expected content, and having the
>> backends advertise/notify the proxy that they are alive.
>>
>
> Well, OPTIONS is the de facto "HTTP ping" but it's also considered
> a "request" (afaik), and so things like keepalives, etc. matter.
> The issue I ran into is that checking with OPTIONS
> doesn't totally remove the problem, esp. for non-keepalive
> connections or one-shots, and then you need to worry whether that
> last OPTIONS forced a connection that was in keepalive mode
> to close, and all that junk.
>
> What I just committed is not the ideal solution, but like you
> said, the real way to do this is non-trivial with the current
> architecture...
>
>


Tracking discussion for Patch 49717 - adds SSLTimeout

2010-08-06 Thread Andrew Oliver
This patch https://issues.apache.org/bugzilla/show_bug.cgi?id=49717
adds an SSLTimeout directive, which is a timeout on the initial SSL
handshake separate from the general Timeout.  See the issue
description for a bit more detail.
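
As proposed, usage would look something like this (a sketch only; the exact
directive syntax is whatever the patch defines, and the values are
illustrative):

    # Assumed: allow 10s for the initial SSL handshake, 5 minutes otherwise
    SSLTimeout 10
    Timeout 300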

Why should this not be added?
and/or Why should this be added?
How should it be improved?

Thanks,

Andy