Dear all,

I have a hardware-based storage array with a capacity of 192 TB, sliced into 64 LUNs of 3 TB each. What would be the best way to configure ZFS on top of this? We do not require ZFS's self-healing capability; we just want its ability to handle very large file systems, plus good performance.

We are currently running Solaris 10 5/09 (Update 7), with ZFS configured as follows (a rough sketch of the commands is below):
a. each 3 TB hardware LUN becomes one zpool
b. each zpool holds exactly one ZFS file system
c. each ZFS file system gets its own mountpoint (obviously).
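
For reference, each pool is created along these lines (just a sketch; the pool names, mountpoints, and device names are placeholders, not our real ones):

    # one pool per 3 TB hardware LUN; the pool's root file system
    # is the only file system and gets its own mountpoint
    zpool create -m /export/fs01 pool01 c4t600A0B8000AAAA01d0
    zpool create -m /export/fs02 pool02 c4t600A0B8000AAAA02d0
    # ... repeated for all 64 LUNs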

The problem is that when the customer runs I/O in parallel against all 64 file systems, kernel usage (%sys) shoots up to around 90% and IOPS degrade. During these runs the array's own front-end CPU utilization barely changes, which suggests the bottleneck is not at the hardware storage level but somewhere inside the Solaris box.
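
For completeness, this is roughly how we are observing it (the intervals are just what we happen to use):

    # per-CPU statistics; %sys climbs to around 90 during the runs
    mpstat 5
    # kernel profiling to see where the system time is going
    # (interrupt-based profile, top 20 entries, over 30 seconds)
    lockstat -kIW -D 20 sleep 30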

Does anyone have experience with a similar setup? Or can anyone point me to information on the best way to deal with hardware storage at this scale?
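
For example, would a single pool striped across all 64 LUNs, with the 64 file systems created inside it, be a better layout? A sketch of what I mean (names are placeholders):

    # one pool with all 64 LUNs as top-level vdevs
    zpool create bigpool c4t600A0B8000AAAA01d0 c4t600A0B8000AAAA02d0
    # (list all 64 LUN devices on the create line)
    # 64 file systems sharing the single pool
    zfs create -o mountpoint=/export/fs01 bigpool/fs01
    zfs create -o mountpoint=/export/fs02 bigpool/fs02
    # ... repeated for all 64 file systems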

Please advise, and thanks in advance,

Dedhi
