Hi Andy,
This morning I added Priority Queueing (PRIQ) to the ruleset so that TCP ACK
packets are preferred over everything else. I can see the queues with
systat queue, but the change has no effect on the user experience or the
throughput.
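For reference, a minimal sketch of such an ALTQ PRIQ setup (the interface
macro, the 1Gb figure and the rule shapes are assumptions, not the actual
ruleset):

ext_if = "em0"

altq on $ext_if priq bandwidth 1Gb queue { q_pri, q_def }
queue q_pri priority 7
queue q_def priority 1 priq(default)

# with two queues on a rule, pure TCP ACKs and low-delay packets go to q_pri
pass out on $ext_if proto tcp flags S/SA keep state queue (q_def, q_pri)
pass in  on $ext_if proto tcp flags S/SA keep state queue (q_def, q_pri)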
I have read about adjusting the TCP send and receive window size settings,
but OpenBSD has done this automatically since 2010 [1]. What else can
I set?
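For reference, the static knobs behind those window settings can still be
inspected with sysctl; this is only a sketch of what to look at, not a
suggestion to change them:

$ sysctl net.inet.tcp.sendspace
$ sysctl net.inet.tcp.recvspace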
Best Regards,
Patrick
[1] http://marc.info/?l=openbsd-misc&m=128905075911814
On Thu, 2 Oct 2014, jum...@yahoo.de wrote:
Hi Andy,
Set up some queues and prioritise your ACKs ;)
Good idea, I will try to implement Priority Queueing with the old altq.
Best Regards,
Patrick
On Thu, 2 Oct 2014, Andy wrote:
Set up some queues and prioritise your ACKs ;)
The box is fine under that load, I'm sure, but you'll still need to
prioritise those TCP acknowledgments to make things snappy when lots of
traffic is going on.
On 02/10/14 17:13, Ville Valkonen wrote:
Hello Patrick,
On 2 October 2014 17:32, Patrick <jum...@yahoo.de> wrote:
Hi,
I use an OpenBSD-based firewall (version 5.2, I know I should upgrade but
...) between an 8-host cluster of Linux servers and 300 clients which
access this cluster via VNC. Each server is connected with one gigabit
port to a dedicated switch, and the firewall has one gigabit port on each
side (dedicated switch and campus network).
The users complain about slow VNC response times (if I connect a client
system directly to the dedicated switch, the access is faster, even during
peak hours), and the admins of the cluster blame my firewall :(.
I use MRTG for traffic monitoring (data retrieved from OpenBSD at one-minute
intervals) and can see average traffic of 160 Mbit/s during office hours,
with peaks of 280 Mbit/s. With bwm-ng and a five-second interval I can see
peaks of 580 Mbit/s. The peak packet rate is around 80000 packets per
second (also measured with bwm-ng). The interrupt load on CPU0 peaks at
25%. So based on this data I don't think the firewall is at its limit,
am I right?
The server is a standard Intel Xeon (E3-1220V2, 4 cores, 3.10 GHz) with 4
GByte of memory and four 1 Gbit/s copper Intel NICs (em driver).
Where is the problem? Can't the NICs handle more packets per second? How
can I check for this?
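For reference, a few standard counters that show whether the NICs or the
input path are dropping packets (a sketch, not a full diagnosis):

$ netstat -i                     # Ierrs/Oerrs per interface
$ netstat -m                     # mbuf and cluster usage
$ sysctl net.inet.ip.ifq.drops   # packets dropped from the IP input queue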
If I connect a client system directly to the dedicated switch, the
response times are better.
Thanks for your help,
Patrick
In addition to dmesg, could you please provide the following information:
$ pfctl -si
$ sysctl kern.netlivelocks
Interrupt statistics (from systat, for example) would also be helpful.
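For example, something along these lines (the two-second delay is
arbitrary):

$ vmstat -i          # cumulative interrupt counters per device
$ systat vmstat 2    # live view including interrupt rates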
Thanks!
--
Regards,
Ville