Hi Chris
> ....processing 1 connection through completion
> (there are 99 others still running), re-binding, accepting a single
> connection into the application plus 100 others into the backlog, then
> choking again and dropping 100 connections, then processing another
> single connection. That's a huge waste of time unbinding and
> re-binding to the port, killing the backlog over and over again... and
> all for 1-connection-at-a-time pumping. Insanity.
I'm sorry, but you've misunderstood what I was saying. Yes, the example I used showed it for one connection, to make it easier to understand what I was proposing. But in reality you would not stop and start at each connection. Remember the two thresholds I was talking about? You could stop listening at 4K connections and start listening again when the connection count drops to, say, 3K - and these could be user-specified parameters based on the deployment.
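To make the two-threshold idea concrete, here is a minimal sketch in Java of the hysteresis logic being proposed. The class name, method, and threshold values are hypothetical illustrations, not Tomcat API - the point is only that the listening socket is unbound once per overload episode, not once per connection:

```java
// Sketch of the two-threshold (hysteresis) proposal: stop accepting new
// connections at a high-water mark, and resume only after the active
// connection count falls back to a lower mark. Hypothetical names; this
// is not actual Tomcat code.
public class AcceptGate {
    private final int highWater; // stop listening at this many connections
    private final int lowWater;  // resume listening at or below this many
    private boolean accepting = true;

    public AcceptGate(int highWater, int lowWater) {
        this.highWater = highWater;
        this.lowWater = lowWater;
    }

    // Called with the current active connection count; returns whether the
    // server should keep its listening socket bound. Between the two
    // thresholds, the previous state is kept (hysteresis), so the socket
    // is not unbound and re-bound on every connection.
    public synchronized boolean shouldAccept(int activeConnections) {
        if (accepting && activeConnections >= highWater) {
            accepting = false;       // overloaded: unbind / stop accepting
        } else if (!accepting && activeConnections <= lowWater) {
            accepting = true;        // recovered: re-bind / resume accepting
        }
        return accepting;
    }
}
```

With, say, `new AcceptGate(4000, 3000)`, the gate closes once at 4K and stays closed until the count drains to 3K, so there is no per-connection bind/unbind churn of the kind described above.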

HTTP keep-alive from a load balancer in front would work extremely well under these conditions, as established TCP connections are re-used. Any production-grade load balancer could immediately fail over only the failing requests to another Tomcat when one is under too much load - and this would work even for non-idempotent services.
> You want to add all this extra complexity to the code and, IMO, shitty
> handling of your incoming connections just so you can say "well,
> you're getting 'connection refused' instead of hanging... isn't that
> better?". I assert that it is *not* better. Clients can set TCP
> handshake timeouts and survive. Your server will perform much better
> without all this foolishness.
If you can, try to understand what I said a bit better. It's OK not to accept this proposal and/or not to understand it.

regards
asankha

--
Asankha C. Perera
AdroitLogic, http://adroitlogic.org

http://esbmagic.blogspot.com




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org
