- Original Message -
On Wed, Jul 3, 2013 at 10:01 PM, Lawrence Stewart
On 07/04/13 10:18, Kevin Oberman wrote:
On Wed, Jul 3, 2013 at 4:21 PM, Steven Hartland
Out of interest, have you tried limiting the number of queues? If not, give it a try and see if it helps.
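On FreeBSD of that era the ix(4) queue count could typically be capped with a loader tunable; a minimal sketch, assuming the in-tree ixgbe driver (the tunable name has varied across driver versions, so verify against ixgbe(4) for your release):

```
# /boot/loader.conf -- cap the ix(4) NIC to a single queue per port
# (tunable name assumes the legacy in-tree ixgbe driver; check the
#  ixgbe(4) man page for your driver version before relying on it)
hw.ix.num_queues=1
```

Reboot after setting it; the driver banner in dmesg should then report the reduced queue count.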
ix is just the device name; it is using the ixgbe driver. The driver should
print some kind of banner when it loads. What version of the OS and driver
are you using? I have little experience testing nfs or samba, so I am
not sure right off what might be the problem.
Jack
On Jul 2, 2013, at 10:28 PM, Outback Dingo outbackdi...@gmail.com wrote:
I've got a high-end storage server here; iperf shows decent network io.
The card has a 3 meter twinax cable from Cisco connected to it, going
through a Fujitsu switch. We have tweaked various networking and kernel settings.
On 07/03/13 14:28, Outback Dingo wrote:
I've got a high-end storage server here; iperf shows decent network io.
iperf -i 10 -t 20 -c 10.0.96.1 -w 2.5M -l 2.5M
Client connecting to 10.0.96.1, TCP port 5001
TCP window size: 2.50 MByte (WARNING: requested 2.50 MByte)
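The -w 2.5M window matters because sustained TCP throughput is bounded by window size divided by round-trip time. A quick sanity check of what that window can carry (the RTT figures below are hypothetical illustrations, not measurements from this thread):

```python
# Max TCP throughput is bounded by window size / round-trip time
# (the bandwidth-delay product argument).
WINDOW_BYTES = 2.5 * 1024 * 1024  # the -w 2.5M from the iperf run

def max_throughput_gbps(rtt_seconds):
    """Upper bound on throughput for this window at a given RTT, Gbit/s."""
    return WINDOW_BYTES * 8 / rtt_seconds / 1e9

# At 0.2 ms RTT (plausible for hosts on the same switch) the 2.5 MB
# window is far more than 10GbE needs; at 10 ms it would cap the
# connection well below line rate.
print(round(max_throughput_gbps(0.0002), 1))  # ~104.9 Gbit/s
print(round(max_throughput_gbps(0.010), 1))   # ~2.1 Gbit/s
```

So for a same-switch 10GbE test like this one, the window is not the bottleneck, which points the investigation at the driver, queues, or the NFS/Samba layer instead.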
- Original Message -
From: Outback Dingo outbackdi...@gmail.com
To: Lawrence Stewart lstew...@freebsd.org
Cc: n...@freebsd.org
Sent: Thursday, July 04, 2013 12:06 AM
Subject: Re: Terrible ix performance
... please file a bug if hz is affecting your performance. Ew.
-adrian
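The hz being discussed is the kernel timer tick rate, which on FreeBSD is set at boot via a loader tunable. A minimal sketch of checking and pinning it (values here are the common defaults, not recommendations from the thread):

```
# /boot/loader.conf -- kernel tick rate; 1000 is the usual default on
# modern FreeBSD installs (some VM configurations lower it to 100).
# Read the current value back at runtime with:  sysctl kern.hz
kern.hz=1000
```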
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to freebsd-net-unsubscr...@freebsd.org
On Wed, Jul 3, 2013 at 8:41 PM, Lawrence Stewart lstew...@freebsd.org wrote:
- I recall some advice that zpools should not have more than about 8 or
10 disks in them, and that you should instead create multiple zpools if you
have more disks. Perhaps investigate the source of that rumour and whether it actually applies here.
Peter Wemm quotes some advice about ZFS vdev layout:
1. Virtual Devices Determine IOPS
IOPS (I/O per second) are mostly a factor of the number of virtual
devices (vdevs) in a zpool. They are not a factor of the raw number of disks.
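As a back-of-the-envelope illustration of that rule of thumb (the 150 IOPS-per-spindle figure below is an assumed value for a spinning disk, not a number from the thread):

```python
# Rule of thumb from the quoted advice: random IOPS scale with the
# number of vdevs in the pool, not the number of disks, because each
# vdev services roughly one random I/O at a time.
DISK_IOPS = 150  # assumed figure for a single 7200 rpm disk

def pool_random_iops(num_vdevs, per_disk_iops=DISK_IOPS):
    """Approximate random IOPS: each vdev performs like one disk."""
    return num_vdevs * per_disk_iops

# Twelve disks laid out two ways:
one_raidz2 = pool_random_iops(1)   # a single 12-disk raidz2 vdev
six_mirrors = pool_random_iops(6)  # six 2-way mirror vdevs
print(one_raidz2, six_mirrors)  # 150 900
```

Under this model the same twelve disks deliver roughly six times the random IOPS when split into six mirror vdevs, which is the likely origin of the "no more than 8-10 disks" advice: it really argues for more vdevs, not more pools.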