> Date: Sun, 12 Jun 2011 03:02:23 -0700
> From: da...@lang.hm
> To: bodycar...@live.com
> CC: squ...@treenet.co.nz; squid-users@squid-cache.org
> Subject: RE: [squid-users] squid 3.2.0.5 smp scaling issues
> 
> On Sun, 12 Jun 2011, Jenny Lee wrote:
> 
> >> On 12/06/11 18:46, Jenny Lee wrote:
> >>>
> >>> On Sat, Jun 11, 2011 at 9:40 PM, Jenny Lee wrote:
> >>>
> >>> I'd like to know how you are able to do >13000 requests/sec.
> >>> tcp_fin_timeout defaults to 60 seconds on all *NIXes, and the
> >>> available ephemeral port range is 64K.
> >>> I can't do more than 1K requests/sec with ab, even with
> >>> tcp_tw_reuse/tcp_tw_recycle; I get commBind errors due to
> >>> connections stuck in TIME_WAIT.
> >>> Any tuning options suggested for RHEL6 x64?
> >>> Jenny
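> >>>
> >>> For concreteness, the ceiling those two numbers imply: ~64K ephemeral
> >>> ports divided by a 60-second TIME_WAIT gives roughly 1075 new
> >>> connections per second per client/server address pair, which is about
> >>> the 1K figure above. In shell terms:
> >>>
> >>>   echo $(( (65535 - 1024) / 60 ))    # => 1075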
> >>>
> >>> I would be wary of using both of those at the same time, reuse and
> >>> recycle. I've seen issues when testing my own Linux distros with
> >>> both of these settings enabled. Right or wrong, that was my
> >>> experience.
> >>> As for fin_timeout: on a good connection there is no reason a system
> >>> should take 60 seconds to send out a FIN. Cut that in half, if not
> >>> by two thirds.
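> >>>
> >>> For reference, the knobs being discussed look like this when applied
> >>> one at a time (the values are only illustrative, not recommendations
> >>> from this thread):
> >>>
> >>>   sysctl -w net.ipv4.tcp_fin_timeout=20   # down from the 60 s default
> >>>   sysctl -w net.ipv4.tcp_tw_reuse=1       # reuse TIME_WAIT sockets for new outbound connects
> >>>   # net.ipv4.tcp_tw_recycle=1 is the riskier of the two, especially with NAT in the path
> >>>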
> >>> And what is your limitation at 1K requests/sec: load (if so, look at
> >>> I/O) or network saturation? Maybe I missed an earlier thread, but I
> >>> too would tilt my head at 13K requests/sec!
> >>> Tory
> >>> ---
> >>>
> >>>
> >>> As I mentioned, my limitation is the ephemeral ports tied up in
> >>> TIME_WAIT. The TIME_WAIT issue is a well-known factor when you are
> >>> doing this kind of testing.
> >>>
> >>> When you are tuning, you apply options one at a time. tw_reuse and
> >>> tw_recycle were not used together, and I had a 10-second fin_timeout,
> >>> which made no difference.
> >>>
> >>> Jenny
> >>>
> >>>
> >>> nb: I still don't know how to do indenting/quoting with this
> >>> hotmail... after 10 years.
> >>>
> >>
> >> Couple of things to note.
> >> Firstly, this was a figure reported by ab (Apache bench). It
> >> calculates the software limit from the speed of the transactions it
> >> completed, not necessarily accounting for things like TIME_WAIT.
> >> Particularly if it was extrapolated from, say, 50K requests, which
> >> would not hit that OS limit.
> >
> > ab counts 200-OK responses, and TIME_WAITs cause squid to issue 500s.
> > Of course, if you only send in 50K requests it would not be subject to
> > this, but I usually send a couple of runs of 10+ million to simulate
> > load for at least a while.
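> >
> > The sort of run being described looks roughly like this (host names,
> > port and counts are just placeholders, not the actual test setup):
> >
> >   ab -X proxyhost:3128 -n 10000000 -c 200 http://origin.example.com/
> >
> > ab lists non-2xx responses separately in its summary, which is where
> > the 500s from a starved squid show up.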
> >
> >
> >> He also mentioned using a "local IP address". If that was on the lo
> >> interface, it would not be subject to things like TIME_WAIT or RTT
> >> lag.
> >
> > When I was running my benches on loopback, I had tons of TIME_WAITs
> > for 127.0.0.1, and squid would bail out with: "commBind: Cannot bind
> > socket..."
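> >
> > A quick way to watch that pile-up while a bench runs (either command
> > should do on RHEL6):
> >
> >   netstat -ant | grep -c TIME_WAIT
> >   ss -nt state time-wait | wc -l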
> >
> > Of course, I might be doing things wrong.
> >
> > I am interested in what to optimize at the OS level on RHEL6 to
> > achieve higher requests per second.
> >
> > Jenny
> 
> I'll post my configs when I get back to the office, but one thing to
> note is that if you send requests faster than they can be serviced, the
> pending requests build up until you start getting timeouts. So I have to
> tinker with the number of requests that can be sent in parallel to keep
> the request rate below that point.
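>
> In ab terms that tinkering is essentially the concurrency flag: something
> like the following, lowering -c until the timeouts stop (host names and
> numbers are placeholders):
>
>   ab -X squidhost:3128 -n 1000000 -c 100 http://origin.example.com/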
> 
> Note that when I removed the long list of ACLs I was able to get this
> 13K requests/sec rate going from machine A to squid on machine B to
> apache on machine C, so it's not a localhost thing.
> 
> Getting up to the 13K rate on apache does require some tuning and
> tweaking of apache; stock configs that include dozens of dynamically
> loaded modules just can't achieve these speeds. These are also fairly
> beefy boxes: dual quad-core Opterons with 64G of RAM and 1G Ethernet
> (multiple cards, but I haven't tried trunking them yet).
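>
> A quick way to see how much of a stock config is along for the ride, and
> the sort of trimming meant here (a sketch; directive names are from the
> Apache 2.2 worker MPM, values are illustrative):
>
>   httpd -M | wc -l    # a stock RHEL httpd loads dozens of DSO modules
>   # comment out the unneeded LoadModule lines in httpd.conf, then raise
>   # the worker MPM limits (ServerLimit, ThreadsPerChild, MaxClients)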
> 
> David Lang


OK, I am assuming that persistent connections are on. This doesn't
simulate any real-life scenario.

I would like to know if anyone can do more than 500 reqs/sec with persistent 
connections off.
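
For clarity, "off" here means on both the bench side and the squid side,
something like (the URL is a placeholder; the directive names are from
squid 3.x):

  ab -n 100000 -c 100 http://host/path   # no -k, so a new connection per request
  # and in squid.conf:
  #   client_persistent_connections off
  #   server_persistent_connections off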

Jenny
