On 07.06.2010 20:22, Mangold, Daniel wrote:
> Hello to all,
>
> I have a problem with mod_jk (hope this is the right place for my problem).
>
> Used servers and versions:
> - Apache 2.2.15 (Win32)
> - mod_jk/1.2.30
> - Apache Tomcat/6.0.20 using AJP/1.3
> - jdk1.5.0_12
>
> Problem description:
> I enter the appropriate URL pointing to the balancing web server into the
> Internet Explorer address bar (IE on a host different from the machine where
> the web server and the Tomcats are installed), press Enter, and get a
> '503 - Service Unavailable' message back.
> I have two Tomcat instances, both up and running and accessible over HTTP.
> When (in the same Internet Explorer window) I first enter the URL of one
> Tomcat instance directly, I get the requested page back; when I then try the
> URL through the web server again, it suddenly works. This does not seem to be
> due to caching, because I no longer see the failure message in mod_jk.log and
> the log information indicates that everything went fine.
>
> When I access the web server URL locally from the machine where all the
> servers are installed, it works from the beginning.
> I have tried several configurations and don't know what else to try.
> The mod_jk status page shows that the Tomcat instances were found and that
> there is no error.
>
> mod_jk.log shows these messages when I enter the web server's URL
> (I attached two full mod_jk.conf files to this email with different configs
> but the same result):
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_uri_worker_map.c (1036): Attempting to map URI '/W********h/' from 4 maps
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_uri_worker_map.c (850): Attempting to map context URI '/W********h/*=balancer' source 'JkMount'
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_uri_worker_map.c (863): Found a wildchar match '/W********h/*=balancer'
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] mod_jk.c (2462): Into handler jakarta-servlet worker=balancer r->proxyreq=0
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_worker.c (116): found a worker balancer
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_worker.c (339): Maintaining worker balancer
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_ajp_common.c (3197): reached pool min size 32 from 64 cache slots
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_ajp_common.c (3197): reached pool min size 32 from 64 cache slots
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_worker.c (293): Found worker type 'lb'
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] mod_jk.c (978): Service protocol=HTTP/1.0 method=GET ssl=false host=(null) addr=**.*.*.130 name=********* port=8080 auth=(null) user=(null) laddr=**.*.*.21 raddr=**.*.*.130 uri=/Workbench/
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_lb_worker.c (1118): service sticky_session=1 id='933BF867682BC5657E3F27E5D17917D7'
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_lb_worker.c (946): searching worker for partial sessionid 933BF867682BC5657E3F27E5D17917D7
> [Mon Jun 07 18:29:29 2010][1944:408] [info] jk_lb_worker.c (985): all workers are in error state for session 933BF867682BC5657E3F27E5D17917D7
> [Mon Jun 07 18:29:29 2010][1944:408] [info] jk_lb_worker.c (1448): All tomcat instances failed, no more workers left for recovery (attempt=0, retry=0)
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_lb_worker.c (946): searching worker for partial sessionid 933BF867682BC5657E3F27E5D17917D7
> [Mon Jun 07 18:29:29 2010][1944:408] [info] jk_lb_worker.c (985): all workers are in error state for session 933BF867682BC5657E3F27E5D17917D7
> [Mon Jun 07 18:29:29 2010][1944:408] [info] jk_lb_worker.c (1457): All tomcat instances failed, no more workers left (attempt=1, retry=0)
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_lb_worker.c (1131): retry 1, sleeping for 100 ms before retrying
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_lb_worker.c (946): searching worker for partial sessionid 933BF867682BC5657E3F27E5D17917D7
> [Mon Jun 07 18:29:29 2010][1944:408] [info] jk_lb_worker.c (985): all workers are in error state for session 933BF867682BC5657E3F27E5D17917D7
> [Mon Jun 07 18:29:29 2010][1944:408] [info] jk_lb_worker.c (1457): All tomcat instances failed, no more workers left (attempt=0, retry=1)
> [Mon Jun 07 18:29:29 2010][1944:408] [debug] jk_lb_worker.c (946): searching worker for partial sessionid 933BF867682BC5657E3F27E5D17917D7
> [Mon Jun 07 18:29:29 2010][1944:408] [info] jk_lb_worker.c (985): all workers are in error state for session 933BF867682BC5657E3F27E5D17917D7
> [Mon Jun 07 18:29:29 2010][1944:408] [info] jk_lb_worker.c (1457): All tomcat instances failed, no more workers left (attempt=1, retry=1)
> [Mon Jun 07 18:29:29 2010][1944:408] [info] jk_lb_worker.c (1468): All tomcat instances are busy or in error state
> [Mon Jun 07 18:29:29 2010][1944:408] [error] jk_lb_worker.c (1473): All tomcat instances failed, no more workers left
> [Mon Jun 07 18:29:29 2010]balancer ********* 0.109377
> [Mon Jun 07 18:29:29 2010][1944:408] [info] mod_jk.c (2618): Service error=0 for worker=balancer
The attachment did not go through, and the lines included in the mail are wrapped in a way that makes them hard to read. Briefly reading through the lines suggests that this is not the full log file? Your configuration also seems to be copied from some very old example configurations. Do yourself a favor: grab a source distribution of mod_jk from http://tomcat.apache.org/download-connectors.cgi and have a look at the contained example configuration.

Regards,

Rainer


First: sorry, it looks like at least half of my previous mail was truncated for whatever reason. The attachment did not go through either. And it's true, the pasted log file above is not complete.

However, this is now my current configuration of workers.properties, which seems to work:

worker.list=balancer,status

# DEFAULT CONFIG FOR WORKERS
worker.default.host=localhost
worker.default.type=ajp13
worker.default.socket_connect_timeout=5000
worker.default.socket_keepalive=true
worker.default.connection_pool_minsize=16
worker.default.connection_pool_size=1024
worker.default.connection_pool_timeout=3000
worker.default.reply_timeout=300000

# Disable retries whenever a part of the request was already successfully sent to the backend
worker.template.recovery_options=3

# Define Node1
worker.worker1.reference=worker.default
worker.worker1.port=8033

# Define Node2
worker.worker2.reference=worker.default
worker.worker2.port=8044

# Load balancing behaviour
worker.balancer.type=lb
worker.balancer.balance_workers=worker1,worker2
# Load balancing method can be [R]equest, [S]ession, [T]raffic, or [B]usyness
worker.balancer.method=S
worker.balancer.sticky_session=true
#worker.balancer.sticky_session_force=true
worker.balancer.max_reply_timeouts=10

# Status worker for managing load balancer
worker.status.type=status

Well... after trying different things, it seems that the problem was the force mode for sticky sessions. The Tomcat webapp requires sticky sessions for load balancing; otherwise it won't work.
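A side note on the sticky-session setup: for mod_jk to route a session back to the instance that created it, each Tomcat's Engine needs a jvmRoute that matches the corresponding worker name in workers.properties, because Tomcat appends that route to the session id. A sketch of the relevant server.xml fragments, assuming the worker names and AJP ports from the configuration above (everything else in server.xml stays as shipped):

```xml
<!-- Node 1 (matches worker.worker1, AJP port 8033): conf/server.xml -->
<Connector port="8033" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker1">
  <!-- Host, valves, etc. unchanged -->
</Engine>

<!-- Node 2 (matches worker.worker2, AJP port 8044): conf/server.xml -->
<Connector port="8044" protocol="AJP/1.3" redirectPort="8443" />
<Engine name="Catalina" defaultHost="localhost" jvmRoute="worker2">
  <!-- Host, valves, etc. unchanged -->
</Engine>
```

With matching jvmRoutes, session ids look like `ABC123.worker1` instead of a bare hash, which is what the balancer's sticky lookup keys on.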
So this works fine now:

worker.balancer.sticky_session=true
#worker.balancer.sticky_session_force=true

When I uncomment sticky_session_force, I always get the '503 Service Temporarily Unavailable' message after the second click. If I read the log messages right, the reason is that mod_jk could not establish a connection to any of the Tomcat instances.

For a while I was desperate enough to try load balancing with isapi_redirect-1.2.30 on IIS instead of the Apache web server. It behaves the same way when I use the sticky_session_force property (service unavailable page). On the other hand, when I comment out sticky_session_force there, I had another problem: my guess is that with IIS and isapi_redirect, the sticky_session property did not work at all. But maybe I misconfigured IIS... I'm not really familiar with it.

Are there any known issues with sticky_session on the Apache web server or IIS?

Regards,

Daniel

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
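To illustrate why sticky_session_force produces the 503: Tomcat appends its jvmRoute to the session id after a dot, and the balancer tries to find a worker whose name matches that suffix. If the session id carries no route suffix (as in the log above, where the id is a bare hash) or the matching worker is in error state, the sticky lookup finds nothing; with the force flag set, the balancer then refuses to fail over and answers 503. A rough Python sketch of that matching logic (not mod_jk's actual code; the function names are made up for illustration):

```python
from typing import List, Optional


def route_from_session_id(session_id: str) -> Optional[str]:
    """Return the jvmRoute suffix of a session id, or None if there is none.

    Tomcat emits session ids of the form '<hash>.<jvmRoute>' when a
    jvmRoute is configured on the Engine; otherwise just '<hash>'.
    """
    _, dot, route = session_id.rpartition(".")
    return route if dot else None


def pick_sticky_worker(session_id: str, workers: List[str]) -> Optional[str]:
    """Pick the worker a sticky session should go back to.

    Returns None when no worker name matches the session's route suffix;
    with sticky_session_force enabled, that None becomes a 503 instead of
    falling back to normal balancing.
    """
    route = route_from_session_id(session_id)
    return route if route in workers else None
```

For example, `pick_sticky_worker("933BF867682BC5657E3F27E5D17917D7", ["worker1", "worker2"])` yields no match, while an id like `"ABC123.worker1"` routes to worker1.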