[I had a problem posting my reply to this - sorry if it's duplicated]

> OK, setting threads up higher isn't usually a smart thing.
Hi Trey,

Up to about 3 weeks ago I would have agreed with you, and we had our limit for Simultaneous Running Requests (let's call it SRR) set to 20. As part of a whole load of experiments to see what made a difference, I played around with the SRR figure (partly triggered by something I'd seen on the web suggesting that a low SRR is actually unkind to the core Java engine of CFMX and makes the whole thing run more slowly). I tried various numbers between 20 and 150 at that time, before settling on 96. Since then several people have told me it's a bad idea, but I have to say that it's far better than 20 *for us*.

First - it normally doesn't come into play. 99% of the time we run fine, with an actual max SRR of maybe 15 at the outside; normally it's 5-10. Over the last 100 secs we've averaged 5.11 page hits/sec, and running requests has averaged 5.0. That's normal. At 'normal' peak loads we get 8 page hits/sec and running requests may average 7 or 8.

Second - I am absolutely convinced (and have been since CF 4) that once requests start queuing, CF runs more slowly and uses more CPU. The key for me in managing this website on a day-to-day fire-fighting basis is to avoid queuing at all costs. That's why 96 beats 20 hands down. Where we used to get overloaded maybe 20 times a day with CFMX queuing (and believe me it's been worse than that) when we had SRR set to 20, since I raised it to 96 we get 2 or 3. The other 17 overload spikes are still there, but running requests grows and then declines again before we ever reach queuing.

Third - when we get overloaded we have lots of threads sitting doing nothing for several seconds (if they were running normally we wouldn't get overloaded). So the main effect of increasing the SRR is queue avoidance, not splitting the available resources 96 ways.
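As a side note, those steady-state figures hang together via Little's law (average concurrency = arrival rate × average time in system). A quick sanity check, assuming the quoted numbers are steady-state averages:

```python
# Little's law: L = lambda * W, so W = L / lambda.
# Figures taken from the post; treating them as steady-state averages is an assumption.
arrival_rate = 5.11   # page hits per second
avg_running = 5.0     # average simultaneous running requests

avg_response = avg_running / arrival_rate  # implied average request time, in seconds
print(round(avg_response, 2))  # roughly 0.98 s per request
```

That implied ~1 second average request time lines up with the ~1000 ms legacy-server response time mentioned further down.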
I cannot substantiate this because it is impossible to measure, but my gut feel from day-after-day experience is that an SRR of 96 will recover from the same overload much more quickly than an SRR of 20. Say that at its worst we had 70 threads: with an SRR of 20 that's 20 running and 50 queued; with an SRR of 96 they are all running. I've said, and I'm serious, that when it's queuing the CPU load goes up - way out of proportion - and SQL responses slow down (why?), which is why I'm convinced it recovers much more quickly if I can avoid queuing in the first place. My hiking the number to 200 is on the same quest. If people think 200 is silly, I'll go with that, but I won't drop it below 96.

Contrary to what you suggest (and what I would have agreed with until 3 weeks ago), having an SRR of 96 makes the site run better, not worse. Whatever else I try as a result of discussion on this thread, I won't be dropping it back to 20, because I know from experience that I'd then be able to cope with much less load.

One other thing that is relevant: of the 300k hits we take on a busy day, about 100k involve talking to the legacy servers I've mentioned. We write a file posing a query; the legacy server renames the file, works on the query and writes the result back as a new file. We look for the reply (fileExists) and do a series of Java sleeps while we are waiting. So although these threads hang around for a while, they take few resources while waiting. The average response time is maybe about 1000 ms; we sleep in 50 ms chunks.

Alan
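For anyone unfamiliar with that file-drop pattern, here's a minimal sketch of the wait side in Python (the site itself does this in CFMX with fileExists() and a Java sleep; the function name, paths, and timeout here are all illustrative, not the real code):

```python
import os
import time

def await_legacy_reply(reply_path, timeout_ms=5000, poll_ms=50):
    """Poll for the legacy server's reply file, sleeping in short chunks.

    The thread is parked for most of the wait, so it holds few resources
    even though the request itself stays "running" for ~1000 ms.
    """
    waited = 0
    while waited < timeout_ms:
        if os.path.exists(reply_path):           # the fileExists() check
            with open(reply_path) as f:
                return f.read()
        time.sleep(poll_ms / 1000.0)             # sleep in 50 ms chunks
        waited += poll_ms
    return None                                  # no reply within the timeout
```

With a ~1000 ms average reply and 50 ms polling intervals, each such request occupies a running-request slot for around 20 sleep cycles while doing almost no work, which is why a higher SRR ceiling costs so little here.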