I get some strange results concerning scp transfer rates depending on which hosts are involved. The test file is a zip-compressed file of 16 MB. All hosts run OpenBSD with an MP kernel and 4 CPUs, but with different versions between 4.8 and 5.2. inet6 is disabled on all interfaces. ping with large packets (10 kB) runs fine between all interfaces (avg < 10 ms, packet loss << 1%). All machines are VMs (KVM and VirtualBox).
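The large-packet pings were essentially of this form (the exact invocation may have differed; the target host is a placeholder):

  # 10 kB payload; avg RTT stayed below 10 ms, packet loss well below 1%
  ping -c 20 -s 10000 <remote-host>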
There are 3 sites with 2 hosts per site. The hosts at each site are connected locally by switches between 100 Mbps and 1 Gbps, and all sites have public internet access between 20 Mbps and 1 Gbps (B = byte, b = bit):

  site H: hostH1 and hostH2 with 1 Gbps (local connection and internet access)
  site M: hostM1 and hostM2 with 100 Mbps local connection and 20 Mbps internet access
  site D: hostD1 and hostD2 with 1 Gbps (local connection and internet access)

scp runs at the expected speed (>> 1 MBps) between all hosts, with one exception: copies between a host at site D and a host at another site usually start at less than 1 MBps and then slow down to an overall average of 100..200 kBps. (scp between hostD1 and hostD2 runs fast, though.)

After establishing an IPsec tunnel between hostD2 and hostM2, scp between the private IP addresses of these two hosts still runs slow. But: an scp between a Linux host that uses hostD2 as its VPN router and a host that uses hostM2 as its VPN router runs at nearly the expected speed of 2 MBps.

BTW: some of the hosts have a larger pf.conf and use carp, but hostD2 and hostM2 don't use carp and just use OpenBSD's default pf.conf:

  set skip on lo
  pass
  block in on ! lo0 proto tcp to port 6000:6010

scp between hostM2 and other hosts (1 x OpenBSD 4.3, 1 x Linux) shows normal transfer rates.

Any idea what could make scp so slow under certain circumstances, or what to test to get some more information? (One test I'm considering is sketched in the P.S. below.)

Maybe I should mention that the hosts at site D also had some performance problems with OpenVPN tunnels, but since I want to move to IPsec I don't want to focus on that.

--
TIA, Tobias.
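P.S. The raw TCP test I have in mind, to see whether the slowdown is in ssh/scp itself or in the TCP path, would be roughly this (tcpbench is in the OpenBSD base system; the hostnames are just the ones from above, and the exact options may need adjusting):

  # on hostM2: start a tcpbench listener (default TCP port 12345)
  tcpbench -s

  # on hostD2: measure raw TCP throughput towards hostM2, for comparison
  # with the ~100..200 kBps seen with scp
  tcpbench hostM2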