Our config is:
OpenSolaris snv_118 x64
1 x LSISAS3801E controller
2 x 23-disk JBOD (fully populated, 1TB 7.2k SATA drives)
Each of the two external ports on the LSI connects to a 23-disk JBOD. ZFS-wise 
we use 1 zpool with 2 x 22-disk raidz2 vdevs (1 vdev per JBOD). The zpool has 
one ZFS filesystem containing millions of files/directories. This data is 
served up via CIFS (kernel), which is why we went with snv_118 (the first 
release after 2009.06 with a stable CIFS server). As I mentioned to James, we 
know the server won't be a star performance-wise, especially with such wide 
vdevs, but it shouldn't hiccup under load either. A guaranteed way for us to 
reproduce these IO errors is to load the zpool with about 30 TB of data (90% 
full) and then scrub it. Within 30 minutes the errors start appearing, and they 
usually escalate into "failing" disks (due to excessive retry errors), which 
just makes things worse.
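
For anyone who wants to reproduce this, the layout and trigger look roughly
like the following. This is a sketch only: the pool name and device names
(c1t0d0 etc.) are placeholders, not our actual configuration.

```shell
# One 22-disk raidz2 vdev per JBOD, both vdevs in a single pool
# (device names are illustrative placeholders):
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 ... c1t21d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 ... c2t21d0

# Fill the pool to ~90% (about 30 TB of data), then start a scrub:
zpool scrub tank

# Watch for the checksum/IO errors as the scrub progresses:
zpool status -v tank
```

In our case the errors show up within about 30 minutes of starting the scrub.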
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss