On 07/07/2009, at 8:20 PM, James Andrewartha wrote:

> Have you tried putting the slog on this controller, either as an SSD or regular disk? It's supported by the mega_sas driver, x86 and amd64 only.

What exactly are you suggesting here? Configuring one disk on this array as a dedicated ZIL (slog) device? Would that actually improve performance over keeping the ZIL internal to the pool, spread across all the disks?
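
If I do try it, my understanding is that a dedicated log device is simply added to the existing pool, along the lines of the following (pool and device names here are placeholders, not my actual layout):

  zpool add tank log c2t5d0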

I have now done some tests with the PERC6/E in two configurations, both with the internal ZIL (no separate log device): RAID10 (every disk exported as a single-disk RAID0 LUN, mirrored and striped by ZFS) and hardware RAID5.
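
For reference, the two layouts looked roughly like this (device names are illustrative only, not the real ones):

  # RAID10-style: each disk exported from the PERC as a single-disk RAID0 LUN,
  # then mirrored and striped by ZFS
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
                    mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0 \
                    mirror c1t8d0 c1t9d0

  # hardware RAID5: one big LUN built by the controller, single-device pool
  zpool create tank c1t0d0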

RAID10 (10 disks, 5 mirror vdevs)
create 2m14.448s
unlink 0m54.503s

RAID5 (9 disks, 1 hot spare)
create 1m58.819s
unlink 0m48.509s

Unfortunately, Linux with XFS on the same RAID5 array still seems significantly faster.

Linux RAID5 (9 disks, 1 hot spare), XFS
create 1m30.911s
unlink 0m38.953s
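
(For anyone wanting to run a similar comparison themselves, a crude timed create/unlink loop of the following shape is enough to see the difference; this is purely illustrative and is not the actual benchmark or file counts behind the numbers above:

  time sh -c 'for i in $(seq 1 100000); do touch f.$i; done'   # create phase
  time sh -c 'for i in $(seq 1 100000); do rm f.$i; done'      # unlink phase
)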

Is there a way to disable the write barrier in ZFS in the way you can with Linux filesystems (-o barrier=0)? Would this make any difference?

After much consideration, the lack of barrier support makes no difference to filesystem integrity when you have a battery-backed write cache: once the controller has acknowledged a write, the data survives a power loss, so the ordering guarantee that barriers provide is already covered by the cache.
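
For the record, the closest ZFS equivalent I know of is not a per-filesystem mount option but a system-wide tunable that stops ZFS issuing cache-flush commands to the array. On OpenSolaris that would be something like the following line in /etc/system (only sensible when the write cache is battery-backed/non-volatile), versus mounting XFS with barriers off on the Linux side (device and mountpoint below are placeholders):

  set zfs:zfs_nocacheflush = 1

  mount -o nobarrier /dev/sdb1 /data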

Since the hardware and configuration are identical, I think this is now a fair apples-to-apples comparison. I'm starting to wonder whether XFS is simply the faster filesystem here (not the most practical to manage, just faster).

cheers,
James

