That is a very interesting idea, Ryan. Not as ideal as I had hoped, but it does open up a way of maximizing the number of VM guests. Thanks for that suggestion. Also, if I added another subnet and another vmkernel, would I be allowed another 32 NFS mounts? That is, is the limit 32 NFS mounts per vmkernel, or 32 NFS mounts period?
--
HUGE

David Stahl
Systems Administrator
718 233 9164 / F 718 625 5157

www.hugeinc.com <http://www.hugeinc.com>

> From: Ryan Arneson <ryan.arne...@sun.com>
> Date: Tue, 16 Jun 2009 15:14:31 -0600
> To: HUGE | David Stahl <dst...@hugeinc.com>
> Cc: <zfs-discuss@opensolaris.org>
> Subject: Re: [zfs-discuss] ZFS, ESX, and NFS. oh my!
>
> HUGE | David Stahl wrote:
>> I'm curious if anyone else has run into this problem, and if so, what
>> solutions they use to get around it.
>> We are using VMware ESXi servers with an OpenSolaris NFS backend.
>> This allows us to leverage all the awesomeness of ZFS, including the
>> snapshots and clones. The best feature of this is that we can create a
>> VMware guest template (CentOS/Ubuntu/Windows/whatever) and use
>> snapshot/cloning to make an instant copy of that machine, which
>> initially takes hardly any additional space. Everything is great.
>> My issue is that VMware ESX only allows a limit of 32 NFS mounts,
>> and because of this we can't seem to get more than 32 servers (from
>> NFS). Every single VMware machine is its own zvol. I tried making my
>> NFS mount at a higher zvol level, but I cannot traverse to the
>> sub-zvols from that mount.
>> Another thing I tried was adding another NIC to the VMware server,
>> but you cannot have more than one vmkernel on the same subnet.
>>
>> Does anyone have any experience with overcoming these limitations?
> I understand what you are trying to do, but yes, VMware has that
> 32-mount-point limit (64 in vSphere 4.0, I believe). My only suggestion
> is to put more VMs in a single mountpoint. And VMware does not support
> NFSv4 mirror mounts, so you can't try mounting them under subdirs.
>
> So in your case of using this for quick clone deployment, create your
> golden image and then use VMware to clone that another 5-10 times (or
> whatever; an NFS share can handle a larger number of VMs than a FC/iSCSI
> VMFS3 datastore) in that same NFS mountpoint.
> Then when you snap/clone on the array side, your granularity will be in
> groups of 5-10 (or more). For example, if you put 10 images in a single
> "golden" NFS share and clone that 32 times, you get 320 VMs; 20 per
> directory would give you 640.
>
> Not ideal; you'll have to worry about updating 5-10 images instead of
> one when you refresh patches/apps, but that's the VMware limitation we
> are dealing with.
>
> We'd really like to see VMware support NFSv4 mirror-mounts in the
> future, but they have not commented on their plans there.
>
> -ryan
>
>> --
>> HUGE
>>
>> David Stahl
>> Systems Administrator
>> 718 233 9164 / F 718 625 5157
>>
>> www.hugeinc.com <http://www.hugeinc.com>
>
> --
> Ryan Arneson
> Sun Microsystems, Inc.
> 303-223-6264
> ryan.arne...@sun.com
> http://blogs.sun.com/rarneson

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
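For anyone following along, the array-side snap/clone of a "golden" share that Ryan describes might look like the sketch below. The pool and dataset names (tank/vmstore/golden, clone1..N) are hypothetical, and it defaults to a dry run that only prints the zfs commands so you can review them before running anything for real.

```shell
#!/bin/sh
# Sketch: snapshot one "golden" NFS share (holding several VM images)
# and clone it N times; each clone becomes its own NFS share. With
# 5-10 VMs per share, 32 mounts can back 160-320+ guests.
# Dataset names are hypothetical; adjust for your pool layout.
GOLDEN=${GOLDEN:-tank/vmstore/golden}
CLONES=${CLONES:-32}
DRYRUN=${DRYRUN:-1}   # 1 = just print the commands

run() {
    if [ "$DRYRUN" -eq 1 ]; then
        echo "$@"
    else
        "$@"
    fi
}

# One snapshot of the golden share...
run zfs snapshot "${GOLDEN}@deploy"

# ...cloned CLONES times, each clone shared over NFS.
i=1
while [ "$i" -le "$CLONES" ]; do
    run zfs clone "${GOLDEN}@deploy" "tank/vmstore/clone${i}"
    run zfs set sharenfs=on "tank/vmstore/clone${i}"
    i=$((i + 1))
done
```

Clones are copy-on-write, so as with the single-VM case, each cloned share initially consumes almost no additional space; ESX then mounts each clone as one of its (at most 32) NFS datastores.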