On Tue, Oct 2, 2012 at 9:59 AM, Christiano F. Haesbaert wrote:
> Why not use tcpbench, where you can actually specify the parameters
> and know what is going on :).
>
> Play with buffer sizes and you'll see a big difference; using -u will
> give you the actual PPS.
>
I agree, I stopped using iPerf.
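For reference, a minimal tcpbench run between the two test hosts could look
something like the following; the options are my reading of tcpbench(1)
(server mode, run time, socket buffer size, parallel connections, UDP mode),
and 172.16.2.2 is simply the address already used in the nc tests below, so
treat this as a sketch rather than a recipe:

server# tcpbench -s
client# tcpbench -t 30 172.16.2.2
client# tcpbench -t 30 -S 262144 -n 4 172.16.2.2   (bigger socket buffers, 4 connections)
client# tcpbench -u 172.16.2.2                     (UDP, reports packets per second)

Check tcpbench(1) on the release in use for the exact option set.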
On Tue, Oct 02, 2012 at 09:59:05AM +0200, Christiano F. Haesbaert wrote:
> Why not use tcpbench, where you can actually specify the parameters
> and know what is going on :).
>
> Play with buffer sizes and you'll see a big difference; using -u will
> give you the actual PPS.
I agree with this.
On 2 October 2012 08:57, David Coppa wrote:
> On Mon, Oct 1, 2012 at 5:55 PM, Russell Garrison wrote:
>> Is iPerf running threaded? What about dd to null and a loopback listener?
>
> Beware: only the -current net/iperf port (since Tue Sep 25) has threading enabled.
>
> ciao,
> David
>
Why not use tcpbench, where you can actually specify the parameters
and know what is going on :).
Play with buffer sizes and you'll see a big difference; using -u will
give you the actual PPS.
On Mon, Oct 1, 2012 at 5:55 PM, Russell Garrison wrote:
> Is iPerf running threaded? What about dd to null and a loopback listener?
Beware: only the -current net/iperf port (since Tue Sep 25) has threading enabled.
ciao,
David
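If iperf is kept around anyway, the threading in the updated port only
matters once parallel streams are used; a sketch with standard iperf2
options, reusing the 172.16.2.2 address from the nc tests:

server# iperf -s
client# iperf -c 172.16.2.2 -t 30 -P 4

Here -P 4 opens four parallel TCP streams and -t 30 runs the test for 30
seconds; -w can be added to enlarge the TCP window if a single stream
plateaus.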
Thus said Jim Miller on Mon, 01 Oct 2012 11:20:06 EDT:
> # dd if=/dev/zero bs=1000 count=100 | nc -v 172.16.2.2 12345
What if you try a different bs?
$ dd if=/dev/zero bs=1000 count=100 > /dev/null
100+0 records in
100+0 records out
10 bytes transferred in 1.102 secs (907
Perhaps the pipe size causes degradation; I seem to recall getting better
results on benchmarks without pipes.
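One way to act on both suggestions at once, a larger block size and no pipe
in the timed path, is to pre-build a file and let nc read it directly; the
sizes here are only illustrative:

# dd if=/dev/zero of=/tmp/testfile bs=65536 count=16000    (roughly 1GB)
# time nc -v 172.16.2.2 12345 < /tmp/testfile

with the same `nc -v -l 12345 > /dev/null` listener used elsewhere in the
thread on the other side; dividing the file size by the wall-clock time then
gives a throughput figure with neither dd nor the pipe in the measured path.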
On 1 Oct 2012 18:07, "Otto Moerbeek" wrote:
> On Mon, Oct 01, 2012 at 11:20:06AM -0400, Jim Miller wrote:
>
> > I just reran the test again. I still receive about 600Mbps using iPerf
>
On Mon, Oct 01, 2012 at 11:20:06AM -0400, Jim Miller wrote:
> I just reran the test again. I still receive about 600Mbps using iPerf
> however using
>
> client
> # dd if=/dev/zero bs=1000 count=100 | nc -v 172.16.2.2 12345
>
> server
> # nc -v -l 12345 > /dev/null
>
> I get numbers around 350Mbps. I tend to think iPerf is more reliable in
> this situation.
I just reran the test again. I still receive about 600Mbps using iPerf
however using
client
# dd if=/dev/zero bs=1000 count=100 | nc -v 172.16.2.2 12345
server
# nc -v -l 12345 > /dev/null
I get numbers around 350Mbps. I tend to think iPerf is more reliable in
this situation.
Any ideas why?
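One thing to note about the dd invocation as quoted: if the count really is
100 with bs=1000, the whole transfer is only 1000 * 100 = 100,000 bytes,
i.e. about 100KB, which at several hundred Mbit/s completes in a few
milliseconds, so dd/nc startup and TCP slow start dominate whatever is being
timed. The roughly 1GB runs later in the thread give much steadier numbers.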
600Mbps seems about right, I tested a pair of E5649-based boxes to
550Mbps last year (with aes-128-gcm):
http://marc.info/?l=openbsd-misc&m=134033767126930
You'll probably get slightly more than 600 with multiple TCP
streams.
Assuming PF was enabled for your test (the default configuration
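If PF is a suspect, a quick way to check its state and take it out of the
picture for a re-test (standard pfctl usage):

# pfctl -si | grep -i status    (shows Enabled/Disabled)
# pfctl -d                      (disable PF for the benchmark)
# pfctl -e                      (re-enable it afterwards)

Alternatively a `set skip on enc0` line in pf.conf leaves PF running but
skips filtering on the IPsec enc(4) interface.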
Yes. Let me double-check everything again on Monday. Keep in mind that
all devices had 1Gb ethernet interfaces and everything was directly
cabled, and there were no pf rules either. Without IPsec I could get
900Mbps through the OpenBSD boxes.
Now you've got me thinking I need to recheck everything.
-Jim
Hi,
On 28.9.2012 22:09, Jim Miller wrote:
> So using another Mac w/ 1Gb ethernet adapter to a Linux box w/ 1Gb eth I
> was able to achieve approx. 600Mbps performance through the test setup
> (via iperf and my dd method).
>
600Mbps via IPsec between two Intel E31220s?
So I just realized another serious flaw in my testing. I was using a
Mac Air w/ USB 100Mb ethernet adapter for one of the hosts behind the
OpenBSD VPN devices. And it must have been limiting the speed more than
I thought.
So using another Mac w/ 1Gb ethernet adapter to a Linux box w/ 1Gb eth I
was able to achieve approx. 600Mbps performance through the test setup
(via iperf and my dd method).
Jim Miller wrote:
> The test I'm using is this
> Host A:
> # nc -v -l 12345 > /dev/null
>
> Host B:
> # dd if=/dev/zero bs=1000 count=1 | nc -v 172.16.2.2 12345
I increased the count a bit:
1000000000 bytes transferred in 53.265 secs (18773882 bytes/sec)
That's with AES-256-GCM between two Sandy Bridge boxes.
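For scale, 18773882 bytes/sec works out to 18773882 * 8, roughly 150
Mbit/s, which puts this dd-over-nc figure well below the 350-600Mbps
numbers quoted earlier in the thread.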
Good catch. I've since upgraded to the amd64 kernel. See the dmesg below.
The performance jumped from 40Mbps to approx. 70Mbps, which is obviously
a significant jump. I've tried switching the childsa between aes-256-gmac,
aes-256-gcm, and aes-128, and the times are fairly constant. I assume the
AES-NI
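For anyone trying to reproduce the cipher comparison: the child SA cipher
is selected with the childsa option in iked.conf(5). A rough, hypothetical
sketch of a flow between two test networks, with invented addresses and the
exact grammar to be checked against iked.conf(5) on the release in use:

ikev2 "lan" esp from 172.16.1.0/24 to 172.16.2.0/24 peer 192.0.2.2 childsa enc aes-128-gcm

Swapping aes-128-gcm for aes-256-gcm or aes-256-gmac here is the kind of
change being compared above.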
On Fri, Sep 28, 2012 at 08:38:37AM -0400, Jim Miller wrote:
> Sorry I was stingy on the dmesg output. Here's the full dump. I will
> test with other AES modes now.
And then install amd64 ;-)
-Otto
>
> -Jim
>
>
> OpenBSD 5.1 (GENERIC.MP) #188: Sun Feb 12 09:55:11 MST 2012
>
Sorry I was stingy on the dmesg output. Here's the full dump. I will
test with other AES modes now.
-Jim
OpenBSD 5.1 (GENERIC.MP) #188: Sun Feb 12 09:55:11 MST 2012
dera...@i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC.MP
cpu0: Intel(R) Xeon(R) CPU E31220 @ 3.10GHz ("GenuineIntel
On 2012 Sep 27 (Thu) at 17:30:38 -0400 (-0400), Jim Miller wrote:
:Hardware Configuration:
:- (2) identical SuperMicro systems with quad core E31220 w/ AES-NI enabled
:
:cpu0: Intel(R) Xeon(R) CPU E31220 @ 3.10GHz ("GenuineIntel" 686-class)
:3.10 GHz
:cpu0:
:FPU,V86,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,
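The cpu0 feature list is cut off right where the interesting flags would
be; one quick check for AES-NI on the running kernel is to grep the boot
messages, e.g.:

# dmesg | grep -i aes

and look for an AES flag in the cpu0 feature line. Whether the kernel
crypto code actually uses the instructions is a separate question, which is
presumably why installing amd64 was suggested further up the thread.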
On Fri, Sep 28, 2012 at 11:45 AM, Otto Moerbeek wrote:
> On Thu, Sep 27, 2012 at 05:30:38PM -0400, Jim Miller wrote:
>
>> Hi,
>>
>> I'm trying to determine if the performance I'm seeing between two
>> OpenBSD 5.1 IPSEC VPN endpoints is typical (or expected). I recognize
>> there are quite a few variables to consider and I'm sure I've not
>> toggled each one but I could use a sanity check regardless.
On Thu, Sep 27, 2012 at 11:30 PM, Jim Miller wrote:
> Hi,
>
> I'm trying to determine if the performance I'm seeing between two
> OpenBSD 5.1 IPSEC VPN endpoints is typical (or expected). I recognize
> there are quite a few variables to consider and I'm sure I've not
> toggled each one but I could use a sanity check regardless.
On Thu, Sep 27, 2012 at 05:30:38PM -0400, Jim Miller wrote:
> Hi,
>
> I'm trying to determine if the performance I'm seeing between two
> OpenBSD 5.1 IPSEC VPN endpoints is typical (or expected). I recognize
> there are quite a few variables to consider and I'm sure I've not
> toggled each one but I could use a sanity check regardless.
Hi,
I'm trying to determine if the performance I'm seeing between two
OpenBSD 5.1 IPSEC VPN endpoints is typical (or expected). I recognize
there are quite a few variables to consider and I'm sure I've not
toggled each one but I could use a sanity check regardless.
Question:
With the configurati