Everyone, thank you for the comments, you've given me lots of great info to
research further.

On Mon, Jun 7, 2010 at 15:57, Ross Walker <rswwal...@gmail.com> wrote:

> On Jun 7, 2010, at 2:10 AM, Erik Trimble <erik.trim...@oracle.com> wrote:
>
> Comments in-line.
>
>
> On 6/6/2010 9:16 PM, Ken wrote:
>
I'm looking at VMware ESXi 4, but I'll take any advice offered.
>
> On Sun, Jun 6, 2010 at 19:40, Erik Trimble <erik.trim...@oracle.com> wrote:
>
>>  On 6/6/2010 6:22 PM, Ken wrote:
>>
>> Hi,
>>
>>  I'm looking to build a virtualized web hosting server environment
>> accessing files on a hybrid storage SAN.  I was looking at using the Sun
>> Fire X4540 with the following configuration:
>>
>>    - 6 RAID-Z vdevs with one hot spare each (all 500GB 7200RPM SATA
>>    drives)
>>    - 2 Intel X25 32GB SSDs as a mirrored ZIL
>>    - 4 Intel X25 64GB SSDs as the L2ARC
>>    - Deduplication
>>    - LZJB compression
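>>
>> (Roughly, in zpool/zfs terms I'm picturing something like the sketch
>> below - the device names are just placeholders, not the real X4540
>> layout, so feel free to pick this apart as well:
>>
>>    zpool create tank \
>>        raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 \
>>        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
>>        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
>>        raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 \
>>        raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 c4t6d0 \
>>        raidz c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 \
>>        spare c0t7d0 c1t7d0 c2t7d0 c3t7d0 c4t7d0 c5t7d0 \
>>        log mirror c6t0d0 c6t1d0 \
>>        cache c6t2d0 c6t3d0 c6t4d0 c6t5d0
>>    zfs set dedup=on tank
>>    zfs set compression=lzjb tank
>> )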
>>
>> The clients will be Apache web hosts serving hundreds of domains.
>>
>>  I have the following questions:
>>
>>    - Should I use NFS with all five VMs accessing the exports, or one
>>    LUN for each VM, accessed over iSCSI?
>>
>     Generally speaking, it depends on your comfort level with serving out
> iSCSI volumes to hold the VMs versus serving everything via NFS (hosting
> the VM disk files on an NFS filesystem).
>
> If you go the iSCSI route, I would definitely go with one iSCSI volume
> per VM - note that you can create multiple zvols per zpool on the X4540,
> so giving each VM its own zvol doesn't limit you in any way, and it's a
> lot simpler, easier, and allows for nicer management
> (snapshots/cloning/etc. on the X4540 side).
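>
> As a concrete sketch (COMSTAR syntax from memory, and the names/sizes
> here are made up), one zvol per VM looks roughly like:
>
>     # one-time: create the iSCSI target (COMSTAR)
>     itadm create-target
>
>     # per VM: a zvol, registered as a LUN
>     zfs create -V 40g tank/vm/web01
>     sbdadm create-lu /dev/zvol/rdsk/tank/vm/web01
>     stmfadm add-view <GUID printed by sbdadm>
>
>     # management niceties, all on the X4540 side
>     zfs snapshot tank/vm/web01@gold
>     zfs clone tank/vm/web01@gold tank/vm/web02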
>
> With NFS-hosted VM disks, do the same thing:  create a single filesystem on
> the X4540 for each VM.
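>
> For example (share options and paths invented, adjust to taste):
>
>     zfs create tank/nfs/web01
>     zfs set sharenfs=rw=@10.0.0.0/24,root=@10.0.0.0/24 tank/nfs/web01
>
> and then point the ESXi host at 10.0.0.x:/tank/nfs/web01 as the NFS
> datastore for that VM.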
>
>
> VMware has a 32-mount limit on NFS datastores, which may limit the OP
> somewhat here.
>
>
> Performance-wise, I'd have to test, but I /think/ the iSCSI route will be
> faster, even with the ZIL SSDs.
>
>
> Actually, properly tuned they are about the same, but VMware NFS datastores
> are FSYNC on every operation, which isn't the best for data vmdk files; it's
> best to serve that data directly to the VM using either iSCSI or NFS.
>
>
>>    - Are the FSYNC speed issues with NFS resolved?
>>
>     The ZIL SSDs will compensate for the synchronous write issues in NFS.
> They won't completely eliminate them, but you shouldn't notice problems
> with sync writes until you're up at pretty heavy loads.
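>
> For reference, if those SSDs aren't in the pool yet, adding them is just
> (placeholder device names):
>
>     zpool add tank log mirror c5t0d0 c5t1d0
>     zpool add tank cache c5t2d0 c5t3d0
>
> and "zpool iostat -v tank 5" will show you how hard the log and cache
> devices are actually being hit.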
>
>
> You will need this with VMware, as every NFS operation (not just file
> open/close) coming out of VMware is marked FSYNC (for VM data integrity in
> the face of server failure).
>
>
>>
>     If it were me (and, given what little I know of your data), I'd go
> like this:
>
> (1) pool for VMs:
>         8 disks, MIRRORED
>         1 SSD for L2ARC
>         one zvol per VM instance, served via iSCSI, each with:
>                 Dedup turned ON, compression turned OFF
>
> (1) pool for clients to write data to (log files, incoming data, etc.)
>         6 or 8 disks, MIRRORED
>         2 SSDs for ZIL, mirrored
>         Ideally, as many filesystems as you have webSITES, not just client
> VMs.  As this might be unwieldy for 100s of websites, you should segregate
> them into obvious groupings, taking care with write/read permissions.
>                 NFS served
>                 Dedup OFF, compression ON (or OFF, if the X4540's CPUs seem
> to be overloaded)
>
> (1) pool for client read-only data
>         All the rest of the disks, split into 7- or 8-disk RAIDZ2 vdevs
>         All the remaining SSDs for L2ARC
>         As many filesystems as you have webSITES, not just client VMs.
> (however, see above)
>                 NFS served
>                 Dedup ON for selected websites (filesystems), compression ON
> for everything
>
> (2) Global hot spares.
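>
> In command form that's very roughly the following (device names and the
> exact disk split are placeholders; season to taste):
>
>     # VM pool: mirrors, one L2ARC SSD, dedup'd zvols
>     zpool create vmpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
>         mirror c1t4d0 c1t5d0 mirror c1t6d0 c1t7d0 cache c6t0d0
>     zfs create -o dedup=on -o compression=off -V 40g vmpool/web01
>
>     # write pool: mirrors plus the mirrored ZIL SSDs, NFS served
>     zpool create writepool mirror c2t0d0 c2t1d0 mirror c2t2d0 c2t3d0 \
>         mirror c2t4d0 c2t5d0 log mirror c6t1d0 c6t2d0
>     zfs create -o compression=lzjb -o sharenfs=on writepool/site-logs
>
>     # read-mostly pool: RAIDZ2 vdevs plus the remaining L2ARC SSDs
>     zpool create readpool raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 \
>         c3t5d0 c3t6d0 c3t7d0 cache c6t3d0 c6t4d0 c6t5d0
>     zfs create -o compression=lzjb -o sharenfs=on readpool/sites
>
>     # the two global spares can be shared by all three pools
>     zpool add vmpool spare c7t0d0 c7t1d0
>     zpool add writepool spare c7t0d0 c7t1d0
>     zpool add readpool spare c7t0d0 c7t1d0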
>
>
> Make your life easy and use NFS for both the VMs and the data. If you need
> high-performance storage for something like a database, serve an iSCSI zvol
> directly to the VM; otherwise NFS/CIFS into the VM should be good enough.
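>
> Inside a Linux guest, for instance, that looks something like this
> (addresses and paths are made up):
>
>     # database / high-performance data: log into the iSCSI zvol directly
>     iscsiadm -m discovery -t sendtargets -p 10.0.0.10
>     iscsiadm -m node --login
>
>     # everything else: plain NFS from the filer
>     mount -t nfs 10.0.0.10:/tank/data/site01 /var/www/site01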
>
> -Ross
>
>
