Your machines won't come up running; they'll start from scratch (as if you had 
hit the reset button).

If you want your machines to come up running, you have to take VMware 
snapshots, which capture the state of the running VM (memory, etc.). Typically 
this is automated with solutions like VCB (VMware Consolidated Backup), but 
I've just found http://communities.vmware.com/docs/DOC-8760 (not tested, 
though, since we are running ESX and have bought VCB licenses).
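
For example, on ESXi you can script this from the Tech Support Mode shell with 
vim-cmd (an untested sketch on my side; the VM name and the Vmid "42" below 
are placeholders):

    # Find the VM's Vmid, then take a snapshot that includes memory state.
    vim-cmd vmsvc/getallvms | grep MyVM
    # snapshot.create <vmid> <name> <description> <includeMemory> <quiesced>
    vim-cmd vmsvc/snapshot.create 42 pre-backup "with memory" 1 0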

Bear in mind that VMware won't be able to take a consistent snapshot if some 
disks in the VM come from VMDK files while other disks are raw LUNs (or are 
otherwise mounted directly in the VM, i.e. outside ESX's control). In that 
case you'll have to restart the machine from scratch, with a strong potential 
for discrepancies between the VMDK files and the raw LUNs.
On the other hand, I understand that you want the Exchange 2007 logs and 
database to live their own lives, so that when you "revert to snapshot" you 
don't lose all the mail that was sent/delivered in between.
So this can be a perfectly valid design, depending on how you have set it up.
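
For instance, if the VM's system disk and the Exchange data live on separate 
zvols (the names below are hypothetical), you can roll back just the VM while 
leaving the mail data untouched:

    # tank/exch-vm backs the VM's system disk; tank/exch-db and
    # tank/exch-logs are the raw LUNs mounted inside the guest.
    zfs rollback tank/exch-vm@known-good   # add -r to discard newer snapshots
    # tank/exch-db and tank/exch-logs are not touched, so mail
    # delivered since the snapshot was taken is preserved.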

I don't think snapshots (be they VMware or ZFS) are a good tool for failover 
or redundancy here. Basically, if your storage is not accessible from your 
ESXi hosts, your VMs are toast and you have to restart them from scratch.
Please note that I don't know the specifics of ESXi's iSCSI retry policies. 
For ESX we use an SVC cluster (a 2-node FC cluster), so our ESX hosts can 
always access the storage.

You could try to set up an iSCSI cluster like this: 
http://docs.sun.com/app/docs/doc/820-7821/z40000f557a?a=view (look for the 
figure at the bottom). You would obtain a mirrored pool where you could place 
the VMware zvols, and then share those zvols over iSCSI (see the sketch 
below). Though I'm not sure if/how OpenHA could/would fail over if one of 
your nodes fails (I've always wanted to play with OpenHA but have neither the 
time nor the hardware at hand to try it).
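
On the ZFS side the commands would look something like this (untested sketch; 
the pool, zvol, and device names are placeholders, with one disk coming from 
each head):

    zpool create vmpool mirror c2t1d0 c3t1d0   # mirror across the two heads
    zfs create -V 200G vmpool/esx-lun0         # zvol to present to ESX
    zfs set shareiscsi=on vmpool/esx-lun0      # legacy iscsitgt target

(On recent builds you would use COMSTAR, i.e. sbdadm create-lu and 
stmfadm add-view, instead of the shareiscsi property.)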

This setup of course doesn't prevent you from also taking VMware snapshots 
and ZFS snapshots; it just adds some level of fault tolerance.

Please note that I don't know anything about using NFS with ESX/ESXi. Maybe 
there are setups that are easier to achieve using NFS and provide the same 
(or a better) level of fault tolerance.
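
For what it's worth, the ZFS side would presumably look something like this 
(completely untested on my side, and all the names and addresses below are 
placeholders):

    zfs create tank/vmstore
    # ESX mounts NFS datastores as root, so grant root access to the SAN net:
    zfs set sharenfs='rw,root=@10.0.0.0/24' tank/vmstore
    # On the ESX(i) host, add it as an NFS datastore:
    esxcfg-nas -a -o san-head1 -s /tank/vmstore nfs-datastore1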

Hope this helps,
Arnaud

From: zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Tim Cook
Sent: Tuesday, January 12, 2010 04:36
To: Greg
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] opensolaris-vmware


On Mon, Jan 11, 2010 at 6:17 PM, Greg <gregory.dur...@gmail.com> wrote:
Hello All,
I hope this makes sense, I have two opensolaris machines with a bunch of hard 
disks, one acts as a iSCSI SAN, and the other is identical other than the hard 
disk configuration. The only thing being served are VMWare esxi raw disks, 
which hold either virtual machines or data that the particular virtual machine 
uses, I.E. we have exchange 2007 virtualized and through its iSCSI initiator we 
are mounting two LUNs one for the database and another for the Logs, all on 
different arrays of course. Any how we are then snapshotting this data across 
the SAN network to the other box using snapshot send/recv. In the case the 
other box fails this box can immediatly serve all of the iSCSI LUNs. The 
problem, I don't really know if its a problem...Is when I snapshot a running vm 
will it come up alive in esxi or do I have to accomplish this in a different 
way. These snapshots will then be written to tape with bacula. I hope I am 
posting this in the correct place.

Thanks,
Greg
--

What you've got are crash-consistent snapshots.  The disks are in the same 
state they would be in if you had pulled the power plug.  They may come up 
just fine, or they may be in a corrupt state.  If you take snapshots 
frequently enough, you should have at least one good snapshot.  Your other 
option is scripting: you can build custom scripts to leverage the VSS 
providers in Windows... but it won't be easy.
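
For instance, something like this run from cron (a rough, untested sketch; 
the dataset, host, and file names are placeholders) gives you frequent 
crash-consistent snapshots and keeps the standby box current with incremental 
sends:

    #!/bin/sh
    DS=tank/esx-lun0          # zvol backing the ESXi raw disk
    STANDBY=san-head2         # box receiving the replica
    LAST=/var/run/zfs-last-snap
    NEW=snap-`date +%Y%m%d%H%M`
    zfs snapshot $DS@$NEW
    if [ -s $LAST ]; then
        # Incremental send from the last replicated snapshot.
        zfs send -i $DS@`cat $LAST` $DS@$NEW | ssh $STANDBY zfs recv -F $DS
    else
        # First run: full send.
        zfs send $DS@$NEW | ssh $STANDBY zfs recv -F $DS
    fi
    echo $NEW > $LAST
    # Old snapshots accumulate; prune them separately on both boxes.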

Any reason in particular you're using iSCSI?  I've found NFS to be much 
simpler to manage, and performance to be equivalent if not better (in large 
clusters).

--
--Tim
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
