----- Original Message ----- From: "Outback Dingo" <outbackdi...@gmail.com>
To: "Lawrence Stewart" <lstew...@freebsd.org>
Cc: <n...@freebsd.org>
Sent: Thursday, July 04, 2013 12:06 AM
Subject: Re: Terrible ix performance


On Wed, Jul 3, 2013 at 9:39 AM, Lawrence Stewart <lstew...@freebsd.org> wrote:

On 07/03/13 22:58, Outback Dingo wrote:
> On Wed, Jul 3, 2013 at 4:50 AM, Lawrence Stewart <lstew...@freebsd.org> wrote:
>
>     On 07/03/13 14:28, Outback Dingo wrote:
>     > I've got a high-end storage server here, iperf shows decent
>     > network IO:
>     >
>     > iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
>     > ------------------------------------------------------------
>     > Client connecting to 10.0.96.1, TCP port 5001
>     > TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
>     > ------------------------------------------------------------
>     > [  3] local 10.0.96.2 port 34753 connected with 10.0.96.1 port 5001
>     > [ ID] Interval       Transfer     Bandwidth
>     > [  3]  0.0-10.0 sec  9.78 GBytes  8.40 Gbits/sec
>     > [  3] 10.0-20.0 sec  8.95 GBytes  7.69 Gbits/sec
>     > [  3]  0.0-20.0 sec  18.7 GBytes  8.05 Gbits/sec
>
>     Given that iperf exercises the ixgbe driver (ix), network path and TCP,
>     I would suggest that your subject is rather misleading ;)
>
>     > the card has a 3 meter twinax cable from Cisco connected to it,
>     > going through a Fujitsu switch. We have tweaked various networking
>     > and kernel sysctls, however from an sftp and NFS session I can't get
>     > better than 100MB/s from a zpool with 8 mirrored vdevs. We also have
>     > an identical box with a 1 meter Cisco twinax cable that will get
>     > 1.4Gb/s; it writes at 2.4Gb/s compared to only 1.4Gb/s for reads...
>
>     I take it the RTT between both hosts is very low i.e. sub 1ms?

An answer to the above question would be useful.
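For reference, a quick way to check, assuming ICMP isn't filtered between
the hosts:

ping -c 10 10.0.96.1

The avg figure in the round-trip min/avg/max line is the one of interest;
on a local 10GbE segment it should be well under 1ms.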

>     > does anyone have an idea of what the bottleneck could be?? This is
>     > a shared storage array with dual LSI controllers connected to 32
>     > drives via an enclosure; local dd and other tests show the zpool
>     > performs quite well. However, as soon as we introduce any type of
>     > protocol (sftp, Samba, NFS) performance plummets. I'm quite puzzled
>     > and have run out of ideas, so now curiosity has me... it's loading
>     > the ix driver and working, but not up to speed.
>
>     ssh (and sftp by extension) aren't often tuned for high speed
>     operation. Are you running with the HPN patch applied, or a new
>     enough FreeBSD that has the patch included? Samba and NFS are both
>     likely to need tuning for multi-Gbps operation.
>
>
> Running 9-STABLE as of 3 days ago; what are you referring to, so I can
> validate I don't need to apply it?

Ok so your SSH should have the HPN patch.
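For reference, assuming the base system's sshd really does carry the HPN
additions, the relevant sshd_config knobs look roughly like the following.
The option names and values are illustrative and depend on the HPN patch
version, and NoneEnabled turns off payload encryption, so it is for
throughput testing only:

# /etc/ssh/sshd_config
HPNDisabled no
HPNBufferSize 16384
TcpRcvBufPoll yes
NoneEnabled yes

Restart sshd and rerun the sftp copy; if the rate jumps, the limit is ssh's
buffering/crypto rather than the network or the pool.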

> as for tuning for NFS/Samba: Samba is configured with AIO and sendfile,
> and there's so much information on tuning these things that it's a bit
> hard to decipher what's right and what's not
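For reference, most of that advice boils down to the socket buffer limits;
an illustrative /etc/sysctl.conf sketch, with values that are mine rather
than anything recommended in this thread:

kern.ipc.maxsockbuf=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvbuf_auto=1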

Before looking at tuning, I'd suggest testing with a protocol that
involves the disk but isn't as heavyweight as SSH/NFS/CIFS. FTP is the
obvious choice. Set up an inetd-based FTP instance, serve a file large
enough that it will take ~60s to transfer to the client and report back
what data rates you get from 5 back-to-back transfer trials.
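For anyone reproducing this, a rough sketch of the setup; the paths, the
~50GB file size and the compression caveat are mine, not from the thread:

# server: uncomment the ftpd line in /etc/inetd.conf, i.e.
# ftp  stream  tcp  nowait  root  /usr/libexec/ftpd  ftpd -l
service inetd onestart

# test file big enough for a ~60s transfer; /dev/zero compresses to almost
# nothing if the dataset has compression enabled, so prefer real data there
dd if=/dev/zero of=/tank/ftp/TEST bs=1m count=50000

# client: fetch it and read the rate ftp(1) prints; writing to /dev/null
# keeps the client's disks out of the measurement
ftp 10.0.96.1
ftp> get /tank/ftp/TEST /dev/null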


Via NFS: on the 1Gb interface I get 100MB/s, on the 10Gb interface I get
250MB/s.
On the 1Gb interface I get 112MB/s, and on the 10Gb interface I get:

ftp> put TEST3
53829697536 bytes sent in 01:56 (439.28 MiB/s)
ftp> get TEST3
53829697536 bytes received in 01:21 (632.18 MiB/s)
ftp> get TEST3
53829697536 bytes received in 01:37 (525.37 MiB/s)
ftp> put TEST3
43474223104 bytes sent in 01:50 (376.35 MiB/s)
ftp> put TEST3
local: TEST3 remote: TEST3
229 Entering Extended Passive Mode (|||10613|)
226 Transfer complete
43474223104 bytes sent in 01:41 (410.09 MiB/s)
ftp>

so still only about 50% of the 10Gb link's capacity
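(For scale: 632 MiB/s is roughly 5.3 Gbit/s, against the ~8 Gbit/s iperf
measured over the same path.)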

Out of interest, have you tried limiting the number of queues?

If not, give it a try and see if it helps; add the following to
/boot/loader.conf:
hw.ixgbe.num_queues=1

If nothing else it will give you another data point.
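After a reboot, one way to confirm the setting took effect is to count the
per-queue MSI-X vectors; the interrupt naming is from memory and may vary by
driver version:

vmstat -i | grep ix0

With a single queue you should see one ix0:que line rather than one per CPU.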

   Regards
   Steve

