> -----Original Message-----
> From: zfs-discuss-boun...@opensolaris.org
> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Ian D
> Sent: Friday, October 15, 2010 4:19 PM
> To: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Performance issues with iSCSI under Linux
>
> A little setback... We found out that we also have the issue with the
> Dell H800 controllers, not just the LSI 9200-16e. With the Dell it's
> initially faster because we benefit from the cache, but after a little
> while it goes sour: from 350 MB/sec down to less than 40 MB/sec. We've
> also tried an LSI 9200-8e, with the same results.
>
> So to recap: no matter which HBA we use, copying over the network
> to/from the external drives is painfully slow when access goes through
> either NFS or iSCSI. HOWEVER, it is plenty fast when we do an scp and
> the data is written to the external drives (or internal ones, for that
> matter) that the Nexenta box sees as local drives, i.e. when neither
> NFS nor iSCSI is involved.
Has anyone suggested either removing the L2ARC/SLOG devices entirely, or relocating them so that all devices hang off the same controller? You've swapped the external controller, but the H700 with the internal drives could be the real culprit. Could there be issues with cross-controller I/O in this case? Does the H700 use the same chipset/driver as the other controllers you've tried?

I don't have a good understanding of where the various software components fit together here, but it seems the problem is not with the controller(s) but with whatever queues network I/O requests to the storage subsystem (or controls the queues/buffers/etc. for this). Do NFS and iSCSI share a code path for that?

-Will

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
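For anyone wanting to try the experiment above, a minimal sketch of the commands involved. The pool name `tank` and the device names are placeholders; the real names come from `zpool status` on your own box:

```shell
# Show the pool layout to identify the cache (L2ARC) and log (SLOG) devices
zpool status tank

# Remove the cache and log devices so all I/O stays on the data vdevs
# (device names below are hypothetical; substitute the ones zpool status shows)
zpool remove tank c2t0d0    # cache device (L2ARC)
zpool remove tank c2t1d0    # log device (SLOG)

# Compare which driver each controller binds to, e.g. the internal H700
# versus the external H800/LSI 9200 cards (driver names will vary)
prtconf -D | grep -i -e mpt -e mega
```

Note that log-device removal requires a pool version recent enough to support it (it landed in later OpenSolaris builds); if `zpool remove` refuses, that itself narrows things down. Re-adding the devices afterwards with `zpool add tank cache ...` / `zpool add tank log ...` restores the original layout.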