Hi,

I think the local ZFS filesystem with raidz on the 7210 is not the problem (as long as the disks are fast), but you can test it with e.g. bonnie++ (downloadable at sunfreeware.com). NFS itself should also not be the problem, because iSCSI is just as slow (isn't it?).
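
For example, a minimal bonnie++ run could look like this (only a sketch; /pool/tmp is a placeholder, and the file size should be roughly twice the RAM so the ARC cannot hide the disks):

  # local throughput test, run as root, file size ~2x RAM
  bonnie++ -d /pool/tmp -s 32g -u root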

Some other ideas are:

Network connection: did you test the network speed to the NAS? Maybe upgrade to 10 GBit if that turns out to be the bottleneck. You can test the speed/bandwidth by logging on to an ESX host via ssh and timing the creation of a bigger (10 GByte) virtual disk (vmdk) on an NFS-mounted share, for example like below.
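
A rough sketch of that test (/nfspath is just a placeholder for your NFS datastore path, which on ESX is mounted under /vmfs/volumes):

  # on the ESX host, via ssh
  time /usr/sbin/vmkfstools -c 10G -d eagerzeroedthick /nfspath/test.vmdk
  # eagerzeroedthick writes the full 10 GByte, so MByte/s = 10240 / elapsed seconds;
  # a single 1 GBit link tops out at roughly 100-120 MByte/s anyway
  rm /nfspath/test.vmdk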

It is also possible that the VMs are the bottleneck. VM guests with heavy, small (virtual) disk access, like databases, can hammer a NAS and the network connection with many small IP packets, so a 1 GBit connection could be too slow (but virtualizing bigger databases with many accesses is not a good idea anyway).
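
If you can get a shell on the NAS (on the 7210 the Analytics screens show similar information), you can check whether it is really many small operations rather than raw bandwidth, for example:

  # average request size = kr/s divided by r/s (same for writes); many ops with tiny sizes = random I/O
  iostat -xn 5
  # per-operation NFS counters on the server side
  nfsstat -s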

If you have a test NAS you can try various things, like disabling the ZIL, and run a VM on it.
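
Disabling the ZIL on a plain Solaris/OpenSolaris test box is a system-wide tunable; only do this for testing, since a crash can then lose the last few seconds of acknowledged writes. A sketch (pool/vmstore is a placeholder; newer builds have a per-dataset property instead, if yours supports it):

  # /etc/system on the test NAS, takes effect after a reboot
  set zfs:zil_disable = 1

  # or, on newer OpenSolaris builds, per dataset:
  zfs set sync=disabled pool/vmstore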

I hope I could help you a little. We also run vSphere 4 against a Solaris 10 NAS (NFS) and it runs very well, but only with VMs that have no or only small databases, and with a RAID controller with BBU-backed write cache and RAID 5.

Regards (sorry for my English ;-)
Axel Denfeld



Mark wrote:
We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets.  When I 
installed it I selected the best bang for the buck on the speed vs. capacity chart.

We run about 30 VMs on it, across 3 ESX 4 servers.  Right now it's all running 
NFS, and it sucks... sooo slow.

iSCSI was no better.
I am wondering how I can increase the performance, because they want to add more 
VMs... the good news is most are idle-ish, but even idle VMs create a lot of 
random chatter to the disks!

So a few options maybe...
1) Change to iSCSI mounts for ESX, and enable write cache on the LUNs, since the 
7210 is on a UPS.
2) Get a Logzilla SSD mirror.  (Do SSDs fail? Do I really need a mirror?)
3) Reconfigure the NAS to RAID10 instead of RAIDz.

Obviously all 3 would be ideal, though with an SSD can I keep using NFS and get the 
same performance, since the sync writes would then be satisfied by the SSD?
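
For option 2, I assume it would just be a matter of adding a mirrored log device to the existing pool, something like this (pool and disk names made up; on the 7210 itself this would presumably go through the appliance UI):

  # attach a mirrored pair of SSDs as a separate intent-log (slog) device
  zpool add tank log mirror c1t10d0 c1t11d0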

I dread getting the OK to spend the $$,$$$ on SSDs and then not getting the 
performance increase we want.

How would you weigh these?  I noticed in testing on a 5-disk OpenSolaris box that 
changing from a single RAIDz pool to RAID10 netted a larger IOPS increase than 
adding an Intel SSD as a Logzilla.  That's not going to scale the same way, though, 
with a 44-disk set striped across 11 RAIDz vdevs.
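
For reference, the two layouts I compared on the 5-disk box were roughly these (disk names are placeholders; the mirrored layout only uses four of the five disks):

  # single raidz vdev across all five disks
  zpool create test raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
  # vs. two mirrored pairs striped together ("RAID10")
  zpool create test mirror c1t1d0 c1t2d0 mirror c1t3d0 c1t4d0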

Any thoughts?  Would simply moving to write-cache-enabled iSCSI LUNs without 
an SSD speed things up a lot by itself?

