Hi, all:
    I configured Apache with mod_jk as an HTTP load balancer for a JBoss
cluster. The cluster provides some web services, and the URL that clients
use in their web service calls is of course the load balancer's URL. If the
web service on a node is unavailable for some reason, a client call routed
to it gets a 500 response, which should be handled by mod_jk via the
fail_on_status directive. I tried it, but failover does not seem to work.
For example, I have two nodes in a cluster; the web service on node 1 is
available, but the one on node 2 is not. I sent 10 calls through the load
balancer and found that 9 succeeded while one still got an exception. It
seems that after the first call is dispatched to node 2, node 2 returns a
500 error and the load balancer marks it as being in an error state.
Subsequent calls are then no longer dispatched to node 2, which is good.
But that first call itself does not fail over to node 1, so only 9 of the
10 calls succeed.
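For reference, the 10 calls were sent with a small loop like the one below. The balancer URL and service path here are placeholders, not my real ones:

```python
import urllib.request
import urllib.error

# Placeholder for the load balancer's URL; the real host and
# service path in my setup are different.
URL = "http://localhost:9/myservice"

ok = 0
for _ in range(10):
    try:
        with urllib.request.urlopen(URL, timeout=2) as resp:
            if resp.status == 200:
                ok += 1
    except (urllib.error.URLError, OSError):
        # A 500 from a dead node arrives here as HTTPError
        # (a URLError subclass); connection failures land here too.
        pass

print(f"{ok}/10 calls succeeded")
```

With both nodes healthy this prints 10/10; in the scenario above it prints 9/10, because the one call dispatched to node 2 errors out instead of being retried on node 1.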
   Did I forget some other configuration needed to make that first call
fail over? My workers.properties is shown below. Any suggestion would be
much appreciated!

# Define list of workers that will be used
# for mapping requests
worker.list=loadbalancer,status

# Define Node1
# modify the host as your host IP or DNS name.
worker.node1.port=8009
worker.node1.host=10.111.3.59
worker.node1.type=ajp13
worker.node1.lbfactor=1
worker.node1.cachesize=10
worker.node1.fail_on_status=500

# Define Node2
# modify the host as your host IP or DNS name.
worker.node2.port=8009
worker.node2.host=10.111.3.50
worker.node2.type=ajp13
worker.node2.lbfactor=1
worker.node2.cachesize=10
worker.node2.fail_on_status=500

# Load-balancing behaviour
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1

# Status worker for managing load balancer
worker.status.type=status
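
For completeness, the Apache side is wired up roughly as follows; the module path and the mounted context are examples from my setup, so adjust them to yours:

```
# httpd.conf (mod_jk section); paths and context are examples
LoadModule jk_module modules/mod_jk.so
JkWorkersFile conf/workers.properties
JkLogFile logs/mod_jk.log
JkLogLevel info

# Send the web service context to the balancer and
# expose the status worker for monitoring
JkMount /myservice/* loadbalancer
JkMount /jkstatus status
```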

Thanks!
Zeke
