Tim Cook wrote:
> On Fri, Oct 22, 2010 at 10:40 PM, Haudy Kazemi <kaze0...@umn.edu
> <mailto:kaze0...@umn.edu>> wrote:
>
>>> One thing suspicious is that we notice a slowdown of one pool
>>> when the other is under load. How can that be?
>>> Ian
>>
>> A network switch that is being maxed out? Some switches cannot
>> switch at rated line speed on all their ports at the same time;
>> their internal buses simply don't have the bandwidth needed for
>> that. Maybe you are running into that limit? (I know you
>> mentioned bypassing the switch completely in some other tests and
>> not noticing any difference.)
>>
>> Any other hardware in common?
>
> There's almost zero chance a switch is being overrun by a single
> GigE connection. The worst switch I've seen is roughly 8:1
> oversubscribed. You'd have to be maxing out many, many ports for a
> switch to be a problem.
>
> Likely you don't have enough RAM or CPU in the box.
>
> --Tim
I agree, but I'm also trying not to assume anything. Looking back, Ian's
first email said '10GbE on a dedicated switch'. I don't think the
switch model was ever identified... perhaps it is a 1 GbE switch with a
few 10 GbE ports? (Grasping at straws.)
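As a back-of-envelope check on the oversubscription idea, here is a quick calculation of worst-case port demand against a switch's fabric capacity. The port counts and fabric figure below are hypothetical stand-ins; plug in the real switch's specs:

```python
# Back-of-envelope switch oversubscription check.
# All figures below are hypothetical -- substitute the actual switch's specs.
ports_10g = 4      # number of 10 GbE ports
ports_1g = 24      # number of 1 GbE ports
fabric_gbps = 32   # switching-fabric capacity in Gb/s (hypothetical)

# Ports are full duplex, so worst-case demand is twice the sum of line rates.
demand_gbps = 2 * (ports_10g * 10 + ports_1g * 1)
ratio = demand_gbps / fabric_gbps

print(f"worst-case demand: {demand_gbps} Gb/s")
print(f"oversubscription ratio: {ratio:.1f}:1")
```

If the ratio comes out well above 1:1, the switch cannot sustain line rate on all ports simultaneously, which would match Tim's point that only many busy ports (not one GigE link) could expose it.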
What happens when Windows is the iSCSI initiator connecting to an iSCSI
target on ZFS? If that is also slow, the issue is likely not in Windows
or in Linux.
Do CIFS shares (mounted from both Linux and Windows) show the same
performance problems as iSCSI and NFS? If yes, that would point to a
common cause on the ZFS side.
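To make that comparison concrete, one rough approach is to time the same large sequential write against each mount point and compare throughput. This is only a sketch; the mount paths below are placeholders for the actual CIFS/NFS/iSCSI mounts:

```python
import os
import time

# Placeholder paths -- substitute the real CIFS/NFS/iSCSI mount points.
MOUNTS = {
    "cifs": "/mnt/cifs_share",
    "nfs": "/mnt/nfs_share",
    "iscsi": "/mnt/iscsi_lun",
}
SIZE_MB = 256
CHUNK = b"\0" * (1024 * 1024)  # 1 MiB of zeroes per write

def measure_write(path, size_mb=SIZE_MB):
    """Write size_mb MiB sequentially to path; return throughput in MB/s."""
    testfile = os.path.join(path, "throughput_test.bin")
    start = time.monotonic()
    with open(testfile, "wb") as f:
        for _ in range(size_mb):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())  # ensure the data actually reaches the share
    elapsed = time.monotonic() - start
    os.remove(testfile)
    return size_mb / elapsed

if __name__ == "__main__":
    for name, path in MOUNTS.items():
        if os.path.isdir(path):
            print(f"{name}: {measure_write(path):.1f} MB/s")
        else:
            print(f"{name}: mount point {path} not found, skipping")
```

If all three protocols show the same degraded numbers while the other pool is loaded, that strengthens the case for a server-side (ZFS/hardware) bottleneck rather than a client or protocol issue.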
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss