Hello,

I'm testing a pair of OpenBSD 4.2 firewalls with Iperf and I'm seeing some
very strange behaviour: when I reboot the backup node, the throughput drops
while the backup node is coming back up.
Iperf log:
[  3] 233.0-234.0 sec  6.62 MBytes  55.5 Mbits/sec
[  3] 234.0-235.0 sec  6.62 MBytes  55.5 Mbits/sec
[  3] 235.0-236.0 sec  6.62 MBytes  55.5 Mbits/sec
[  3] 236.0-237.0 sec  6.70 MBytes  56.2 Mbits/sec
[  3] 237.0-238.0 sec    288 KBytes  2.36 Mbits/sec
[  3] 238.0-239.0 sec  3.40 MBytes  28.5 Mbits/sec
[  3] 239.0-240.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 240.0-241.0 sec  3.55 MBytes  29.8 Mbits/sec
[  3] 241.0-242.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 242.0-243.0 sec  3.49 MBytes  29.3 Mbits/sec
[  3] 243.0-244.0 sec  0.00 Bytes  0.00 bits/sec
[  3] 244.0-245.0 sec  3.49 MBytes  29.3 Mbits/sec
[  3] 245.0-246.0 sec  2.30 MBytes  19.3 Mbits/sec
[  3] 246.0-247.0 sec  5.23 MBytes  43.9 Mbits/sec
[  3] 247.0-248.0 sec  2.60 MBytes  21.8 Mbits/sec
[  3] 248.0-249.0 sec  5.37 MBytes  45.0 Mbits/sec
[  3] 249.0-250.0 sec  1.28 MBytes  10.7 Mbits/sec
[  3] 250.0-251.0 sec  4.69 MBytes  39.3 Mbits/sec
[  3] 251.0-252.0 sec  4.69 MBytes  39.3 Mbits/sec
[  3] 252.0-253.0 sec  6.62 MBytes  55.5 Mbits/sec
[  3] 253.0-254.0 sec  6.62 MBytes  55.5 Mbits/sec
[  3] 254.0-255.0 sec  6.62 MBytes  55.5 Mbits/sec

The drop in throughput happens exactly while the rebooted node is coming
back up. Iperf runs between one machine behind one firewall interface and
another machine behind a different firewall interface; one of them runs
OpenBSD and the other Linux.
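For reference, the test is just a plain TCP Iperf run through the firewall,
roughly like this (the address below is a placeholder and the exact
interval/duration options may differ slightly from what I ran):

    # on the Linux box behind one interface
    iperf -s

    # on the OpenBSD box behind the other interface
    iperf -c 192.168.1.10 -i 1 -t 300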
Is there any reason for this behaviour? I wouldn't expect the backup node to
have any influence on traffic flowing through the active node.

Related to this is a problem with pfsync. Sometimes I get a bad state after
the backup firewall comes back, and then Iperf gets completely messed up,
sometimes recovering and sometimes not. It makes no difference whether
pfsync is configured for multicast or with a syncpeer.
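In case it matters, the two pfsync setups I tried look roughly like this
(the sync interface name and the peer address are placeholders for my real
ones):

    # /etc/hostname.pfsync0 -- multicast on the sync interface
    up syncdev em2

    # /etc/hostname.pfsync0 -- unicast to the other firewall instead
    up syncdev em2 syncpeer 192.168.254.2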
Log from the active node:
Apr 10 06:57:03 inferno /bsd: pfsync: received bulk update request
Apr 10 06:57:04 inferno /bsd: pfsync: bulk update complete
Apr 10 06:57:04 inferno pflogd[23092]: invalid size 484 (116/116), packet
dropped
Apr 10 06:57:11 inferno pflogd[23092]: invalid size 144 (116/116), packet
dropped
Apr 10 06:57:16 inferno last message repeated 3 times
Apr 10 06:57:31 inferno pflogd[23092]: invalid size 484 (116/116), packet
dropped
Apr 10 06:57:31 inferno /bsd: pf: BAD state: TCP xx.xx.xx.4:5001
xx.xx.xx.4:5001 xx.xx.xx.5:43558 [lo=2191798936 high=2191798936 win=5840
modulator=0] [lo=911995449 high=912001289 win=65535 modulator=0] 4:4 A
seq=2191798936 (2191798936) ack=911995449 len=1460 ackskew=0
pkts=1267241:671313 dir=in,fwd
Apr 10 06:57:31 inferno /bsd: pf: State failure on: 1
Apr 10 06:57:31 inferno /bsd: pf: BAD state: TCP xx.xx.xx.4:5001
xx.xx.xx.4:5001 xx.xx.xx.5:43558 [lo=2191798936 high=2191798936 win=5840
modulator=0] [lo=911995449 high=912001289 win=65535 modulator=0] 4:4 A
seq=2191800396 (2191800396) ack=911995449 len=1460 ackskew=0
pkts=1267241:671313 dir=in,fwd
Apr 10 06:57:31 inferno /bsd: pf: State failure on: 1

If I destroy the pfsync interface on the master node, this problem doesn't
occur (which is what I expected).
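By "destroy" I mean simply removing the interface on the master while the
test is running, i.e.:

    ifconfig pfsync0 destroy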

Any clue as to what is happening here?

Thanks,
John
