> -----Original Message-----
> From: Edward Ned Harvey [mailto:sh...@nedharvey.com] 
> Sent: Friday, November 19, 2010 8:03 AM
> To: Saxon, Will; 'Günther'; zfs-discuss@opensolaris.org
> Subject: RE: [zfs-discuss] Faster than 1G Ether... ESX to ZFS
> 
> > From: Saxon, Will [mailto:will.sa...@sage.com]
> > 
> > In order to do this, you need to configure passthrough for the
> > device at the host level (host -> configuration -> hardware ->
> > advanced settings). This
> 
> Awesome.  :-)
> The only problem is that once a device is configured to pass through
> to the guest VM, that device isn't available to the host anymore. So
> you have to have your boot disks on a separate controller from the
> primary storage disks that are passed through to the guest ZFS server.
> 
> For a typical ... let's say Dell server ... that could be a problem.
> The boot disks would need to hold ESXi plus a ZFS server, and then you
> can pass through the primary hot-swappable storage HBA to the ZFS
> guest. The ZFS guest can then export its storage back to the ESXi host
> via NFS or iSCSI, so all the remaining VMs can be backed by ZFS. Of
> course, you have to configure ESXi to boot the ZFS guest before any of
> the other guests.
> 
> The problem is just the boot device. One option is to boot from a USB
> dongle, but that's unattractive for a lot of reasons. Another option
> would be a PCIe storage device, which isn't too bad an idea. Is anyone
> using PXE to boot ESXi?
> 
> Got any other suggestions? In a typical Dell server there is no place
> to put a disk that isn't attached via the primary hot-swappable
> storage HBA. I suppose you could use a 1U rackmount server with only
> two internal disks, and add a second HBA with an external storage tray
> to pass through to the ZFS guest.
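
[For reference, the export step described above, where the ZFS guest shares
storage back to the ESXi host over NFS or iSCSI, can be sketched roughly as
follows. Pool and dataset names (tank, vmstore, vmlun) are hypothetical, and
the iSCSI side assumes the OpenSolaris COMSTAR stack.]

```shell
# NFS: share a dataset from the ZFS guest so ESXi can mount it as an
# NFS datastore (hypothetical pool/dataset names).
zfs create tank/vmstore
zfs set sharenfs=on tank/vmstore

# iSCSI (COMSTAR): carve out a zvol and expose it as a LUN.
zfs create -V 100G tank/vmlun
svcadm enable stmf                          # make sure the STMF service is up
sbdadm create-lu /dev/zvol/rdsk/tank/vmlun  # prints the LU GUID
stmfadm add-view 600144f0...                # GUID from sbdadm's output
itadm create-target                         # create an iSCSI target portal
```

[The ESXi side then points at the guest's IP for the NFS datastore or as an
iSCSI target; exact steps depend on the vSphere version.]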

Well, with 4.1, ESXi does support boot from SAN. I guess that still presents a 
chicken-and-egg problem in this scenario, but maybe you have another SAN 
somewhere you can boot from.

Also, most of the big name vendors have a USB or SD option for booting ESXi. I 
believe this is the 'ESXi Embedded' flavor vs. the typical 'ESXi Installable' 
that we're used to. I don't think it's a bad idea at all. I've got a 
not-quite-production system I'm booting off USB right now, and while it takes 
a really long time to boot, it does work. I think I like the SD card option 
better, though.

What I am wondering is whether this is really worth it. Are you planning to 
share the storage out to other VM hosts, or are all the VMs running on the host 
using the 'local' storage? I know we like ZFS vs. traditional RAID and volume 
management, and I get that being able to boot any ZFS-capable OS is good for 
disaster recovery, but what I don't get is how this ends up working better than 
a larger dedicated ZFS system and a storage network. Is it cheaper over several 
hosts? Are you getting better performance through e.g. the vmxnet3 adapter and 
NFS than you would just using the disks directly?

-Will
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
