Hi Peter, I have seen that you have done a lot of testing with Apache benchmarking. I find it a little strange to use this as a test. Basically you will hit a ceiling on standing I/O operations, because pfSense introduces latency. The lower the latency, the more finished tasks/connections per unit of time. Most people don't take this into consideration when they tune Apache, although it is one of the most important aspects of web-server tuning.
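The relationship can be sketched with a back-of-envelope calculation: with a fixed pool of Apache workers, each request occupies a worker for (latency + transfer time), so throughput scales inversely with the per-request hold time. The worker count and timings below are hypothetical, purely for illustration:

```python
# Back-of-envelope sketch: how per-request latency limits connection
# throughput with a fixed pool of Apache worker slots.
# All numbers here are hypothetical, not measured values.

def max_requests_per_sec(workers, latency_s, transfer_s=0.0):
    """Each request holds a worker for latency + transfer time."""
    hold_time = latency_s + transfer_s
    return workers / hold_time

# Latency-bound tiny requests, assuming 256 workers:
fast = max_requests_per_sec(256, 0.0004)  # 0.4 ms round trip
slow = max_requests_per_sec(256, 0.002)   # 2.0 ms round trip
print(f"{fast:.0f} req/s at 0.4 ms vs {slow:.0f} req/s at 2 ms")
print(f"throughput ratio: {fast / slow:.1f}x")

# A slow client (100 KB file at 3 KB/s, 100 ms latency) holds a
# worker for latency + transfer time, i.e. roughly half a minute:
hold = 0.1 + 100 / 3
print(f"slow client holds a worker for about {hold:.1f} s")
```

With everything else fixed, raising round-trip latency from 0.4 ms to 2 ms cuts the achievable request rate by the same 5x factor described above.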
This is the scenario: a client with low bandwidth and high latency will generate a standing I/O operation because of the way Apache is designed. So if a client with 100 ms latency asks for a 100 KB file and has a 3 KB/s transfer rate, he will generate a standing I/O operation for "latency + transfer time", and the I/O operation will not finish until the transfer completes. pfSense does basically the same thing: because it changes the amount of time a request takes to process, you will have more standing I/O operations than if pfSense does routing only (routing alone is faster than routing plus filtering). So let's say you increase latency from 0.4 ms to 2 ms; each standing I/O then lasts 5 times as long, and in turn your ability to serve connections at 2 ms will be 1/5 of what it was at 0.4 ms. I hope this better explains the behavior you see.

As for device polling, there are some sysctls that control polling behavior; running "sysctl kern.polling" will list them. The ones below seem to be the ones that have the most effect on polling and performance. You will have to play around with these settings to find out what works best on your hardware, as I can't seem to find a common setting that works well for all kinds of hardware.

kern.polling.each_burst=80
kern.polling.burst_max=1000
kern.polling.user_frac=50

The info/documentation on these settings seems limited, so you should do some creative Google searching to find out more.

-lsf

-----Original Message-----
From: Peter Zaitsev [mailto:[EMAIL PROTECTED]
Sent: 30 October 2005 05:35
To: support@pfsense.com
Subject: [pfSense Support] Network Device pooling

Hi,

I tested this feature to see if it helps with my apache benchmark problem - no, it does not. It also looks like a firewall-related issue, as when the firewall is totally disabled (pf fails to load rules) everything works as expected. Speaking of network polling - in my case it increased the packet round trip (2 Gbit NICs) from 0.4 ms to 2 ms.
At the same time it noticeably decreased CPU usage during the tests, so this is something to consider if CPU performance ever becomes the problem. On the other hand, I was a bit surprised: according to vmstat, the number of interrupts even on an idle box jumped to some 30,000/sec (from some 150 without this option set). I guess these are timer interrupts used for polling - so why is polling done only about 1000 times per second if we get so many timer interrupts?

One more thing to note: the system needs to be restarted for this option to take effect, but it does not say so anywhere.

---------------------------------------------------------------------
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
---------------------------------------------------------------------