Were those tests you mentioned on RAID-5/6/RAID-Z/Z2, or on mirrored
volumes of some kind?
 
We've found here that VM loads on RAID-10 SATA volumes with relatively
high numbers of disks actually work pretty well - and depending on the
size of the drives, you quite often get more usable space too. ;-)
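 
For reference, the ZFS equivalent of that kind of raid-10 layout is just
a pool of striped 2-way mirrors, with the SSD attached as an L2ARC cache
device - something like this (pool name and device names here are just
placeholders, not our actual config):
 
    zpool create vmpool \
        mirror c1t0d0 c1t1d0 \
        mirror c1t2d0 c1t3d0 \
        mirror c1t4d0 c1t5d0 \
        cache c2t0d0 \
        spare c1t6d0
 
Just a sketch to show the shape of it; scale the number of mirror pairs
up to match however many disks you have.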
 
I suspect 80 VMs on 20 SATA disks might be pushing things, though -
it'll depend on the workload.
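 
On the two layouts proposed further down the thread, a rough
back-of-envelope on usable space (ignoring metadata and base-2 vs
base-10 overhead):
 
    3 x raidz2 of 6 x 1TB:  3 x (6 - 2) x 1TB = ~12TB usable
    9 x 2-way mirrors:      9 x 1TB           = ~9TB usable
 
On the IOPS side, if you figure somewhere around 75-100 random IOPS per
7200rpm SATA drive, 18-20 data disks gives very roughly 1500-2000 IOPS
shared across 80 VMs - call it 20-25 IOPS per VM before the SSD cache
helps - which is why the workload matters so much.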
 
T

________________________________

From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tim Cook
Sent: Tuesday, August 04, 2009 1:18 PM
To: Joachim Sandvik
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Need tips on zfs pool setup..




On Mon, Aug 3, 2009 at 3:34 PM, Joachim Sandvik
<no-re...@opensolaris.org> wrote:


        I am looking at NAS software from Nexenta, and after some
initial testing I like what I see, so I think we will find funding in
the budget for a dual setup.
        
        We are looking at a dual-CPU Supermicro server with about 32GB
RAM, 2 x 250GB OS disks, 21 x 1TB SATA disks, and 1 x 64GB SSD.
        
        The system will use Nexenta's auto-CDP, which I think is based
on AVS, to remote-mirror to a system a few miles away. The system will
mostly be serving as an NFS server for our VMware servers. We have
about 80 VMs that access the datastores.
        
        I have read that it's smart to use a few small RAID groups in a
larger pool, but I am uncertain about placing 21 disks in one pool.
        
        The setup I have thought of so far is:
        
        1 pool with 3 x raidz2 groups of 6 x 1TB disks, plus 2 x 64GB
SSDs for cache and 2 spare disks. This should give us about 12TB.
        
        Another setup I have been thinking about is:
        
        1 pool with 9 x 2-way mirrors of 1TB disks, also with 2 spares
and 2 x 64GB SSDs.
        
        Does anyone have a recommendation on what might be a good setup?


 

FWIW, I think you're nuts putting that many VMs on SATA disks, SSD
cache or not.  If there's ANY kind of I/O load, those disks are going
to fall flat on their face.

VM I/O looks completely random from the storage perspective, and it
tends to be pretty darn latency-sensitive.  Good luck - I'd be happy to
be proven wrong, but every test I've ever done has shown you need
SAS/FC for VMware workloads.

--Tim


