> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Wolfraider
>
> We are looking into the possibility of adding a dedicated ZIL and/or
> L2ARC devices to our pool. We are looking into getting 4 - 32GB Intel
> X25-E SSD drives. Would this be a good solution to slow write speeds?
If you have slow write speeds, a dedicated log device might help. (Log devices are for writes, not for reads.) It sounds like your machine is an iscsi target, in which case you're certainly doing a lot of sync writes and therefore hitting your ZIL hard. So it's all but certain that adding dedicated log devices will help.

One thing to be aware of: once you add a dedicated log, *all* of your sync writes will hit that log device. While a single SSD or pair of SSDs will have fast IOPS, they can easily become a new bottleneck with worse performance than what you had before. If you've got 80 spindle disks now and, by any chance, you perform sequential sync writes, then a single pair of SSDs won't compete. I'd suggest adding several SSDs as log devices, with no mirroring. Perhaps one SSD for every raidz2 vdev, or every other, or every third, depending on what you can afford.

If you have slow reads, an l2arc cache might help. (Cache devices are for reads, not writes.)

> We are currently sharing out different slices of the pool to windows
> servers using comstar and fibrechannel. We are currently getting around
> 300MB/sec performance with 70-100% disk busy.

You may be facing some other problem, aside from just needing cache/log devices. I suggest giving us some more detail here. Such as ...

Large sequential operations are good on raidz2, but random IO performs pretty poorly on raidz2. Which are you doing?

What sort of network are you using? I know you said "comstar and fibrechannel," and "sharing slices to windows" ... I assume this means you're doing iscsi, right? Dual 4Gbit links per server? You're getting 2.4 Gbit and you expect what?

You have a pool made up of 18 raidz2 vdevs with 5 drives each (capacity of 3 disks each) ... Is each vdev on its own bus? What type of bus is it? (Generally speaking, it is preferable to spread vdevs across buses, instead of putting one vdev on one bus, for reliability purposes.) ... How many disks, of what type, on each bus?
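For what it's worth, adding unmirrored log devices and a cache device is just a couple of zpool commands. A rough sketch (pool name "tank" and the cXtYd0 device names are hypothetical; substitute your own pool and SSD device paths):

```shell
# Add four SSDs as separate, unmirrored log devices (hypothetical
# device names). ZFS spreads sync writes across multiple log devices:
zpool add tank log c2t0d0 c2t1d0 c2t2d0 c2t3d0

# Add one SSD as an L2ARC cache device (reads only):
zpool add tank cache c2t4d0

# Confirm the new "logs" and "cache" sections appear in the layout:
zpool status tank

# And when measuring, per-vdev numbers are more telling than totals:
zpool iostat -v tank 5
```

Note that log and cache devices can be removed again with "zpool remove", so it's a fairly low-risk experiment.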
What type of bus, at what speed? What are the usage characteristics, and how are you making your measurement?

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss