You're suffering from a terrible misconfiguration, in my opinion.
A RAIDZ2 vdev is at best going to deliver the performance of only a
single spindle, but built on top of iSCSI disks like this it's going to
be far worse, because it's going to suffer terribly from the *latency*
of the network every time ZFS flushes parity out across it.
Using a RAID (of nearly any kind) made of constituent disks that are
located across a high-latency interconnect (like any IP network) is
going to *kill* performance.
Using RAIDZ2 on *local* disks (or ones with a low-latency interconnect),
combined with some level of mirroring, will give far better
performance. You could then use iSCSI to export a volume from that
RAIDZ2 pool on the storage head.
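As a rough sketch of what I mean (the device names below are just
placeholders, not your actual layout), keep the pool entirely local to
the storage head and put only the finished block I/O on the wire:

  # several smaller raidz2 vdevs, all built from local disks
  zpool create tank \
      raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
      raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0 \
      raidz2 c1t14d0 c1t15d0 c1t16d0 c1t17d0 c1t18d0 c1t19d0 c1t20d0 \
      spare c1t21d0 c1t22d0 c1t23d0

  # then export a volume from the head over iSCSI (same shareiscsi
  # syntax you're already using)
  zfs create -V 10TB -o shareiscsi=on tank/fsrv1data

That way the parity traffic never leaves the box; the network only
carries the client's block I/O.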
- Garrett
On 02/10/10 02:06 PM, Brian E. Imhoff wrote:
I am in the proof-of-concept phase of building a large ZFS/Solaris-based SAN
box, and am experiencing absolutely terrible, essentially unusable performance.
Where to begin...
The Hardware setup:
Supermicro 4U 24 Drive Bay Chassis
Supermicro X8DT3 Server Motherboard
2x Xeon E5520 Nehalem 2.26 GHz quad-core CPUs
4GB Memory
Intel EXPI9404PT 4-port Gigabit server network card (used for iSCSI traffic only)
Adaptec 52445 28-port SATA/SAS RAID controller, connected to
24x Western Digital WD1002FBYS 1TB Enterprise drives.
I have configured the 24 drives as single simple volumes in the Adaptec RAID
BIOS, and am presenting them to the OS as such.
I then create a zpool using raidz2 across all 24 drives, with 1 as a hot spare:
zpool create tank raidz2 c1t0d0 c1t1d0 [....] c1t22d0 spare c1t23d0
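(For reference, the resulting layout can be double-checked with:

  zpool status tank

which should show a single 23-disk raidz2 vdev plus the one spare.)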
Then create a volume store:
zfs create -o canmount=off tank/volumes
Then create a 10 TB volume to be presented to our file server:
zfs create -V 10TB -o shareiscsi=on tank/volumes/fsrv1data
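Since this uses the old shareiscsi property (the legacy iscsitgt
framework rather than COMSTAR), the target it registers can be listed
on the Solaris side with something like:

  iscsitadm list target -v

to confirm the LUN is actually being exported before going to Windows.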
From here, I discover the iSCSI target on our Windows Server 2008 R2 file
server, and see the disk attached in Disk Management. I initialize the 10TB
disk fine and begin a quick format. This is where I first see the poor
performance: the quick format took about 45 minutes, and once the disk
is fully mounted, I get maybe 2-5 MB/s average to it.
I have no clue what I could be doing wrong. To my knowledge, I followed the
documentation for setting this up correctly, though I have not looked at any
tuning guides beyond the first line saying you shouldn't need to do any of this,
because the people who picked the defaults know more about them than you do.
Jumbo frames are enabled on both sides of the iSCSI path, as well as on the
switch, and the rx/tx buffers are increased to 2048 on both sides as well. I
know this is not a hardware or iSCSI network issue: as another test, I
installed Openfiler in a similar configuration (using hardware RAID) on this
box, and was getting 350-450 MB/s from our file server.
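For what it's worth, the Solaris-side MTU can be checked and set with
something along these lines (the interface name e1000g0 is an assumption,
based on the driver the Intel quad-port card normally uses):

  dladm show-linkprop -p mtu e1000g0      # confirm jumbo frames are active
  dladm set-linkprop -p mtu=9000 e1000g0  # enable them if not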
An "iostat -xndz 1" readout of the "%b% coloum during a file copy to the LUN
shows maybe 10-15 seconds of %b at 0 for all disks, then 1-2 seconds of 100, and repeats.
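For a pool-level view of the same bursty pattern, something like:

  zpool iostat -v tank 1

run during the copy shows per-vdev bandwidth and operations each second,
which makes it easier to see whether the whole raidz2 vdev stalls at once.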
Is there anything I need to do to get this usable? Or any additional
information I can provide to help solve this problem? As nice as Openfiler is,
it doesn't have ZFS, which is necessary to achieve our final goal.