Short answer: because some of us just want the flexibility of taking a node down for maintenance, with enough safety built into the system that we don't care if one node goes haywire. I'll probably retract that statement the first time a node goes haywire in a way that damages data. My quorum connection runs over the same network (bond0) as storage and client VM I/O, so I'm trusting that if a node fails in a way that would cause split-brain, it will be self-isolating anyway. I could be wrong, but downtime is only an inconvenience for me right now, not critical. (Having said all that, I do use IPMI fencing because it's available.)

-Adam
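For concreteness, an IPMI fencing stanza of the kind discussed in this thread (fence_ipmilan with the power_wait option) might look like the following in cluster.conf. This is a sketch only: the device names, IP addresses, and credentials are placeholders, not values from any poster's setup.

```xml
<fencedevices>
  <!-- Placeholder addresses/credentials. power_wait pauses after the
       power-off command so the BMC can finish the cycle before the
       fence operation is reported successful. -->
  <fencedevice agent="fence_ipmilan" name="ipmi1" ipaddr="10.0.0.101"
               login="admin" passwd="secret" lanplus="1" power_wait="5"/>
  <fencedevice agent="fence_ipmilan" name="ipmi2" ipaddr="10.0.0.102"
               login="admin" passwd="secret" lanplus="1" power_wait="5"/>
</fencedevices>
```

Each clusternode then references its own fencedevice in a fence method block, analogous to the fence_manual example later in the thread.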
On Dec 6, 2013 6:58 PM, David Thompson <da...@digitaltransitions.ca> wrote:
>
> I can't understand why one would build an HA cluster without fencing. I've
> just finished building an HA cluster myself with IPMI fencing, and I see
> the fencing section as the pivotal point for the whole setup to work
> properly.
>
> I can only assume that if your node(s) go down, then they stay down and
> you have to manually intervene and power them up. Correct?
>
> Perhaps for a testing environment it might do, but if you're testing a
> system to deploy in production, wouldn't one want at least the same type
> of software you'd use in production (perhaps not production hardware)? In
> this case: IPMI-enabled fencing, NFS backend storage, and clustered
> Proxmox servers, or something of similar abilities.
>
> The fence_ipmilan section with the "power_wait" option seems to be the
> crux that holds this all together efficiently, at least for those who are
> using IPMI for fencing.
>
> Sorry, not trying to sound sarcastic, and I apologize if I do; I'm just
> trying to figure out why one would build a system without the security
> that fencing provides to HA failover, with the added ability to bring
> servers back online automatically. Perhaps you or someone else here can
> explain why one would do this. I'm just trying to get a good solid grasp
> on HA clusters and the different configurations that can be done with
> them.
>
> Thanks,
>
> David
>
> On Dec 6, 2013, at 3:23 PM, Gilberto Nunes <gilberto.nune...@gmail.com> wrote:
>
>> I meant: I get it!!!
>>
>> That works pretty awesome!!!
>>
>> 2013/11/10 Gilberto Nunes <gilberto.nune...@gmail.com>
>>>
>>> That's cool, Ahmad...
>>>
>>> I will try it as soon as I can...
>>>
>>> Thanks
>>>
>>> 2013/11/10 ahmad imanudin <ahmadiman1...@gmail.com>
>>>>>
>>>>> Message: 1
>>>>> Date: Fri, 8 Nov 2013 11:40:58 -0200
>>>>> From: Gilberto Nunes <gilberto.nune...@gmail.com>
>>>>> To: pve-user@pve.proxmox.com
>>>>> Subject: [PVE-User] Question about Proxmox HA
>>>>>
>>>>> Hello
>>>>>
>>>>> We have a cluster here created with 3 nodes and simple storage...
>>>>> It's just for test purposes...
>>>>> There are no fence devices, for now.
>>>>> After setting up all the Proxmox hosts and VMs, when I turn off a VM
>>>>> on node1, for example, the VM is started on node2.
>>>>> But when I unplug the network cable, the same doesn't happen!
>>>>>
>>>>> Why? Because the cluster is without a fence device?
>>>>>
>>>>> Thanks...
>>>>
>>>> Hi Gilberto,
>>>>
>>>> I have been testing Proxmox HA without a fencing device and it works.
>>>> I tested with 2 Proxmox nodes but have not tried with 3 nodes.
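Gilberto's observation (pulling the cable does not trigger failover) has a simple arithmetic side, which the following Python sketch illustrates. The helper names are mine for illustration, not part of any Proxmox API:

```python
# Quorum arithmetic for a vote-based cluster (cman/corosync style),
# one vote per node. Helper names are illustrative only.

def quorum(total_votes: int) -> int:
    """Votes needed for quorum: a strict majority."""
    return total_votes // 2 + 1

def has_quorum(votes_present: int, total_votes: int) -> bool:
    return votes_present >= quorum(total_votes)

# In a 3-node cluster, quorum is 2. The node with the unplugged cable
# sees only its own vote, loses quorum, and freezes its services. The
# other two nodes keep quorum, but without a working fence device the
# cluster cannot prove the lost node is dead, so it will not restart
# the VM elsewhere.
assert quorum(3) == 2
assert not has_quorum(1, 3)   # the isolated node
assert has_quorum(2, 3)       # the surviving partition

# Note: quorum(2) == 2, i.e. plain majority arithmetic would require
# both nodes of a 2-node cluster. That is why Ahmad's config below
# sets two_node="1" and expected_votes="1", cman's special case that
# lets a single surviving node keep running.
assert quorum(2) == 2
```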
>>>> I also posted a howto on my blog (in Indonesian):
>>>> http://ahmad.imanudin.com/2013/08/18/tips-proxmox-configure-proxmox-high-availability-without-fencing-device/
>>>>
>>>> This is my cluster configuration:
>>>>
>>>> cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new
>>>> nano /etc/pve/cluster.conf.new
>>>>
>>>> <?xml version="1.0"?>
>>>> <cluster config_version="5" name="excellent">
>>>>   <cman expected_votes="1" keyfile="/var/lib/pve-cluster/corosync.authkey" two_node="1"/>
>>>>   <fencedevices>
>>>>     <fencedevice agent="fence_manual" name="human"/>
>>>>   </fencedevices>
>>>>   <clusternodes>
>>>>     <clusternode name="pve1" nodeid="1" votes="1">
>>>>       <fence>
>>>>         <method name="single">
>>>>           <device name="human" nodename="pve1"/>
>>>>         </method>
>>>>       </fence>
>>>>     </clusternode>
>>>>     <clusternode name="pve2" nodeid="2" votes="1">
>>>>       <fence>
>>>>         <method name="single">
>>>>           <device name="human" nodename="pve2"/>
>>>>         </method>
>>>>       </fence>
>>>>     </clusternode>
>>>>   </clusternodes>
>>>> </cluster>
>>>>
>>>> Note:
>>>> excellent = my cluster name
>>>> pve1 = hostname for Proxmox 1 (node 1)
>>>> pve2 = hostname for Proxmox 2 (node 2)
>>>>
>>>> Important: you must bump config_version on every change to the cluster
>>>> configuration.
>>>>
>>>> Next you can configure the VM following
>>>> http://pve.proxmox.com/wiki/High_Availability_Cluster and try powering
>>>> off a Proxmox host.
>>>>
>>>> Example: I have a VM running on Proxmox 1 (node1) that has been
>>>> configured for HA. Force off node 1 and run this command from node 2
>>>> to take over the VM:
>>>>
>>>> fence_ack_manual pve1    (pve1 is the hostname of node 1)
>>>>
>>>> After that, you must confirm by answering "absolutely".
>>>>
>>>> Thanks
>>>>
>>>> --
>>>> Best Regards,
>>>>
>>>> Ahmad Imanudin - Sharing is Beautiful!
>>>> Web: http://ahmad.imanudin.com  YM: ahmad_imanudin
>>>> FB: http://facebook.com/imanudin11  Twitter: @ahmad_9111
>>>>
>>>> _______________________________________________
>>>> pve-user mailing list
>>>> pve-user@pve.proxmox.com
>>>> http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
>>>
>>> --
>>> Gilberto Ferreira
>>
>> --
>> Gilberto Ferreira
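Ahmad's warning that config_version must be bumped on every change is easy to automate. A hedged sketch follows; the path is a throwaway temp file for illustration, whereas on a live PVE 3.x node the file lives under /etc/pve:

```shell
# Bump config_version in a copy of cluster.conf before activating it.
# /tmp path is illustrative; on a real node you would edit
# /etc/pve/cluster.conf.new and activate it afterwards.
CONF=/tmp/cluster.conf.new
cat > "$CONF" <<'EOF'
<?xml version="1.0"?>
<cluster config_version="5" name="excellent">
</cluster>
EOF

# Read the current version, increment it, and write it back.
cur=$(sed -n 's/.*config_version="\([0-9]*\)".*/\1/p' "$CONF")
new=$((cur + 1))
sed -i "s/config_version=\"$cur\"/config_version=\"$new\"/" "$CONF"
echo "config_version bumped to $new"
```

Running this against the sample file above rewrites config_version="5" to config_version="6"; rerunning it keeps incrementing, matching the rule that every edit needs a fresh version number.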