Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
That is helpful, but I think I am looking at the wrong documentation: http://www.linux-ha.org/wiki/VirtualDomain_(resource_agent) http://linux-ha.org/doc/man-pages/re-ra-VirtualDomain.html Can you point me to the docs you are referencing? - Original Message - From: "RaSca"

[ClusterLabs] HA configuration

2016-02-04 Thread Rishin Gangadharan
Hi All, Could you please help me with the corosync/pacemaker configuration with crmsh? My requirements: I have three resources: 1. VIP 2. Kamailio 3. Redis DB. I want to configure HA for Kamailio with the VIP and Redis in Master/Slave mode. I have configured the VIP and Kamailio and its

Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread RaSca
If your environment is correctly configured on the libvirt side as well, everything should work out of the box; if it does not, you can pass migrate_options to make it work. From the resource agent documentation: migrate_options: Extra virsh options for the guest live migration. You can

Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
I explicitly stated I was using the VirtualDomain resource agent. I did not know it supported live migration, because when I ran crm resource move guest nodeX it shut down the guest before moving it. Let me rephrase my question... How do I use the VirtualDomain resource agent to live migrate a kvm

Re: [ClusterLabs] kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
Thanks very much Cédric. I've added migrate_to/from to my config: primitive tome_kvm ocf:heartbeat:VirtualDomain \ params config="/ocfs2/d01/tome/tome.xml" hypervisor="qemu:///system" migration_transport="ssh" force_stop="false" \ meta allow-migrate="true" target-role="Started" \
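The flattened primitive in the preview above can be read as the following crmsh configuration. This is a sketch reconstructed from the thread: the migrate_to/migrate_from operation timeouts are assumed values (the preview is truncated before the op lines), while the resource name, paths, and parameters come from the message itself.

```
primitive tome_kvm ocf:heartbeat:VirtualDomain \
    params config="/ocfs2/d01/tome/tome.xml" \
           hypervisor="qemu:///system" \
           migration_transport="ssh" \
           force_stop="false" \
    op migrate_to timeout="120s" interval="0" \
    op migrate_from timeout="120s" interval="0" \
    meta allow-migrate="true" target-role="Started"
```

The key line for live migration is `meta allow-migrate="true"`; without it, Pacemaker treats a move as stop-on-source followed by start-on-target, which is the shutdown behaviour described earlier in the thread.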

[ClusterLabs] Antw: Re: kvm live migration, resource moving

2016-02-04 Thread Ulrich Windl
>>> Kyle O'Donnell wrote on 2016-02-04 at 14:17 in message <124846465.11794.1454591851253.javamail.zim...@0b10.mx>: [...] > I had: > location cli-prefer-tome tome_kvm inf: ny4j1-kvm02 > > removed that and I am all good! [...] That's why I ALWAYS specify a time when migrating
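The constraint quoted above is the kind that `crm resource move`/`migrate` creates implicitly: a `cli-prefer-*` location rule with infinite score that pins the resource to one node until it is removed by hand. A sketch of finding and clearing such a constraint (resource and constraint names taken from the thread):

```
# List any leftover move-generated location constraints:
crm configure show | grep cli-prefer

# Remove the one pinning tome_kvm to ny4j1-kvm02:
crm configure delete cli-prefer-tome
```

Clearing the constraint lets the cluster place (and live migrate) the resource freely again.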

Re: [ClusterLabs] Antw: Re: kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
You mean instead of inf: ##? I thought the number was just a preference/priority (lower = higher priority)? - Original Message - From: "Ulrich Windl" To: "users" Sent: Thursday, February 4, 2016 8:47:23 AM Subject: [ClusterLabs]

Re: [ClusterLabs] Antw: Re: kvm live migration, resource moving

2016-02-04 Thread Kyle O'Donnell
Great, I think I am sorted now. Thanks again everyone. - Original Message - From: "RaSca" To: "users" Sent: Thursday, February 4, 2016 9:23:44 AM Subject: Re: [ClusterLabs] Antw: Re: kvm live migration, resource moving The point here

Re: [ClusterLabs] HA configuration

2016-02-04 Thread emmanuel segura
You need to be sure that your redis resource has master/slave support, and I think this colocation needs to be inverted: colocation resource_location1 inf: redis_clone:Master kamailio to colocation resource_location1 inf: kamailio redis_clone:Master You need an order too: order resource_order1
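The inverted colocation and the order constraint suggested above can be sketched in crmsh as follows. The constraint names come from the message; `redis_clone` and `kamailio` are assumed resource names standing in for the poster's actual master/slave clone and primitive.

```
# In a crmsh colocation, the dependent resource comes first:
# place kamailio where the redis master is, not the other way round.
colocation resource_location1 inf: kamailio redis_clone:Master

# Start kamailio only after redis has been promoted to master.
order resource_order1 inf: redis_clone:promote kamailio:start
```

With the original (non-inverted) form, the cluster would place the redis master relative to kamailio, which is backwards: the database master's location should drive where the dependent service runs.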

Re: [ClusterLabs] [Announce] libqb 1.0rc2 release (fixed subject)

2016-02-04 Thread Christine Caulfield
On 03/02/16 17:45, Jan Pokorný wrote: > On 02/02/16 11:05 +, Christine Caulfield wrote: >> I am pleased to announce the second 1.0 release candidate release of >> libqb. Huge thanks to all those who have contributed to this release. > > IIUIC, good news is that so far 1.0.0 is a drop-in

[ClusterLabs] Antw: Re: Antw: Re: kvm live migration, resource moving

2016-02-04 Thread Ulrich Windl
Hi! No, something like "crm resource migrate rsc PT5M". >>> Kyle O'Donnell wrote on 2016-02-04 at 14:52 in message <1203120444.11849.1454593972602.javamail.zim...@0b10.mx>: > you mean instead of inf: ##? I thought the number was just a > preference/priority (lower=higher
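Ulrich's point is that giving the migrate command a lifetime makes the implicit location constraint expire on its own instead of pinning the resource forever. A sketch, using the resource and node names from earlier in the thread; PT5M is an ISO 8601 duration (five minutes):

```
# Move tome_kvm to ny4j1-kvm02; the cli-prefer-* constraint this
# creates expires automatically after 5 minutes:
crm resource migrate tome_kvm ny4j1-kvm02 PT5M

# Without a lifetime, the constraint persists until cleared by hand:
crm resource unmigrate tome_kvm
```

So the PT5M argument is not a score or priority; it is how long the temporary preference should last.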

Re: [ClusterLabs] [OCF] Pacemaker reports a multi-state clone resource instance as running while it is not in fact

2016-02-04 Thread Bogdan Dobrelya
Hello. Regarding the original issue, the good news is that the resource-agents ocf-shellfuncs no longer causes fork bombs in the dummy OCF RA [0] after the fix [1]. The bad news is that the "self-forking" monitors issue seems to remain for the rabbitmq OCF RA [2], and I can reproduce it for another