[ClusterLabs] Re: changing default cib.xml directory
>>> Christopher Harvey wrote on 13.12.2016 at 16:57 in message
<1481644670.3264872.817667121.13e97...@webmail.messagingengine.com>:
> I was wondering if it is possible to tell pacemaker to store the cib.xml
> file in a specific directory. I looked at the code and searched the web
> a bit and haven't found anything. I just wanted to double check here in
> case I missed anything.

What about a symbolic link? And why do you need to customize?

> Thanks,
> Chris

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
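To illustrate the symlink suggestion (paths here are examples only; on a real node you would stop Pacemaker before moving the directory, and preserve ownership for the hacluster user). The mechanics, demonstrated in a throwaway directory:

```shell
# Sketch: relocate a directory and leave a symlink at the canonical path.
# All paths are illustrative, created under a temp dir for demonstration.
root=$(mktemp -d)
mkdir -p "$root/custom/cib"

# The canonical location becomes a symlink to the custom directory.
ln -s "$root/custom/cib" "$root/var-lib-pacemaker-cib"

# Writes through the symlink land in the custom directory:
echo '<cib/>' > "$root/var-lib-pacemaker-cib/cib.xml"
cat "$root/custom/cib/cib.xml"    # prints: <cib/>
```

On a real system the canonical path would be /var/lib/pacemaker/cib (or whatever the build's localstatedir implies).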
Re: [ClusterLabs] Re: Re: clone resource - pacemaker remote
On 12/07/2016 06:26 AM, philipp.achmuel...@arz.at wrote:
>> From: Ken Gaillot
>> To: philipp.achmuel...@arz.at, Cluster Labs - All topics related to
>> open-source clustering welcomed
>> Date: 05.12.2016 17:38
>> Subject: Re: Re: [ClusterLabs] clone resource - pacemaker remote
>>
>> On 12/05/2016 09:20 AM, philipp.achmuel...@arz.at wrote:
>>> Ken Gaillot wrote on 02.12.2016 19:27:09:
>>>
>>>> From: Ken Gaillot
>>>> To: users@clusterlabs.org
>>>> Date: 02.12.2016 19:32
>>>> Subject: Re: [ClusterLabs] clone resource - pacemaker remote
>>>>
>>>> On 12/02/2016 07:08 AM, philipp.achmuel...@arz.at wrote:
>>>>> hi,
>>>>>
>>>>> what is the best way to prevent a clone resource from trying to run
>>>>> on remote/guest nodes?
>>>>
>>>> location constraints with a negative score:
>>>>
>>>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_deciding_which_nodes_a_resource_can_run_on
>>>>
>>>> you can even use a single constraint with a rule based on #kind ne
>>>> cluster, so you don't need a separate constraint for each node:
>>>>
>>>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#_node_attribute_expressions
>>>>
>>>> alternatively, you can set symmetric-cluster=false and use positive
>>>> constraints for cluster nodes only
>>>
>>> set constraint to single primitive, group, or on clone resource?
>>> are there any advantages/disadvantages to using one of these methods?
>>
>> When a resource is cloned, you want to refer to the clone name in any
>> constraints, rather than the primitive name.
>>
>> For a group, it doesn't really matter, but it's simplest to use the
>> group name in constraints -- mainly that keeps you from accidentally
>> setting conflicting constraints on different members of the group.
>> And of course group members are automatically ordered/colocated with
>> each other, so you don't need individual constraints for that.
>
> set location constraint to group didn't work:
>
> ERROR: error: unpack_location_tags: Constraint
> 'location-base-group': Invalid reference to 'base-group'

Maybe a syntax error in your command, or a bug in the tool you're using?
The CIB XML is fine with something like this:

> for clone it works like expected.
> but crm_mon is showing "disallowed" set as "stopped". is this "works as
> designed" or how to prevent this?

You asked it to :) -r == show inactive resources

> crm configure show
> ...
> location location-base-clone base-clone resource-discovery=never \
>     rule -inf: #kind ne cluster
> ...
>
> crm_mon -r
>  Clone Set: base-clone [base-group]
>      Started: [ lnx0223a lnx0223b ]
>      Stopped: [ vm-lnx0106a vm-lnx0107a ]

>>>>> ...
>>>>> node 167873318: lnx0223a \
>>>>>     attributes maintenance=off
>>>>> node 167873319: lnx0223b \
>>>>>     attributes maintenance=off
>>>>> ...
>>>>> primitive vm-lnx0107a VirtualDomain \
>>>>>     params hypervisor="qemu:///system" config="/etc/kvm/lnx0107a.xml" \
>>>>>     meta remote-node=lnx0107a238 \
>>>>>     utilization cpu=1 hv_memory=4096
>>>>> primitive remote-lnx0106a ocf:pacemaker:remote \
>>>>>     params server=xx.xx.xx.xx \
>>>>>     meta target-role=Started
>>>>> group base-group dlm clvm vg1
>>>>> clone base-clone base-group \
>>>>>     meta interleave=true target-role=Started
>>>>> ...
>>>>>
>>>>> Dec 1 14:32:57 lnx0223a crmd[9826]: notice: Initiating start
>>>>> operation dlm_start_0 on lnx0107a238
>>>>> Dec 1 14:32:58 lnx0107a pacemaker_remoted[1492]: notice: executing -
>>>>> rsc:dlm action:start call_id:7
>>>>> Dec 1 14:32:58 lnx0107a pacemaker_remoted[1492]: notice: finished -
>>>>> rsc:dlm action:start call_id:7 exit-code:5 exec-time:16ms
>>>>> queue-time:0ms
>>>>> Dec 1 14:32:58 lnx0223b crmd[9328]: error: Result of start
>>>>> operation for dlm on lnx0107a238: Not installed
>>>>> Dec 1 14:32:58 lnx0223a crmd[9826]: warning: Action 31 (dlm_start_0)
>>>>> on lnx0107a238 failed (target: 0 vs. rc: 5): Error
>>>>> Dec 1 14:32:58 lnx0223a crmd[9826]: warning: Action 31 (dlm_start_0)
>>>>> on lnx0107a238 failed (target: 0 vs. rc: 5): Error
>>>>> Dec 1 14:34:07 lnx0223a pengine[9824]: warning: Processing failed op
>>>>> start for dlm:2 on lnx0107a238: not installed (5)
>>>>> Dec 1 14:34:07 lnx0223a pengine[9824]: warning: Processing failed op
>>>>> start for dlm:2 on lnx0107a238: not installed (5)
>>>>> ...
>>>>> Dec 1 14:32:49 lnx0223a pengine[9824]: notice: Start
>>>>> dlm:3#011(remote-lnx0106a)
>>>>> Dec 1 14:32:49 lnx0223a crmd[9826]: notice: Initiating monitor
>>>>> operation dlm_monitor_0 locally on remote-lnx0106a
>>>>> Dec 1
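[Editor's sketch: the inline CIB XML example referred to earlier did not survive the archive. Based on the crm constraint quoted in this thread, the XML form of such a location constraint would look roughly like this; the ids are illustrative.]

```xml
<!-- Keep base-clone off anything that is not a full cluster node
     (i.e. remote and guest nodes), via a #kind rule. -->
<rsc_location id="location-base-clone" rsc="base-clone"
              resource-discovery="never">
  <rule id="location-base-clone-rule" score="-INFINITY">
    <expression id="location-base-clone-rule-expr"
                attribute="#kind" operation="ne" value="cluster"/>
  </rule>
</rsc_location>
```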
Re: [ClusterLabs] changing default cib.xml directory
On 12/13/2016 09:57 AM, Christopher Harvey wrote:
> I was wondering if it is possible to tell pacemaker to store the cib.xml
> file in a specific directory. I looked at the code and searched the web
> a bit and haven't found anything. I just wanted to double check here in
> case I missed anything.
>
> Thanks,
> Chris

Only when building the source code, with

    ./configure --localstatedir=DIR

which defaults to /var (pacemaker will always add /lib/pacemaker/cib to it)
[ClusterLabs] changing default cib.xml directory
I was wondering if it is possible to tell pacemaker to store the cib.xml
file in a specific directory. I looked at the code and searched the web
a bit and haven't found anything. I just wanted to double check here in
case I missed anything.

Thanks,
Chris
Re: [ClusterLabs] Re: Re: hawk - pacemaker remote
On 12/13/2016 05:26 AM, philipp.achmuel...@arz.at wrote:
>> From: Kristoffer Grönlund
>> To: philipp.achmuel...@arz.at, kgail...@redhat.com, Cluster Labs -
>> All topics related to open-source clustering welcomed
>> Date: 12.12.2016 16:13
>> Subject: Re: [ClusterLabs] Re: hawk - pacemaker remote
>>
>> philipp.achmuel...@arz.at writes:
>>
>>> tried several things, didn't get this working.
>>> Any examples how to configure this?
>>> Also how to configure for VirtualDomain with remote_node enabled
>>>
>>> thank you!
>>
>> Without any details, it is difficult to help - what things did you try,
>> what does "not working" mean? Hawk can show remote nodes, but it only
>> shows them if they have entries in the nodes section of the
>> configuration (as Ken said).
>
> hi,
>
> this is my current testsystem:
>
> Online: [ lnx0223a lnx0223b ]
> GuestOnline: [ vm-lnx0106a@lnx0223b vm-lnx0107a@lnx0223a ]
>
> Full list of resources:
>
> stonith_sbd (stonith:external/sbd): Started lnx0223a
>  Clone Set: base-clone [base-group]
>      Started: [ lnx0223a lnx0223b ]
>      Stopped: [ vm-lnx0106a vm-lnx0107a ]
> FAKE1 (ocf::pacemaker:Dummy): Started lnx0223b
>  Clone Set: post-clone [postfix-service]
>      Started: [ vm-lnx0106a vm-lnx0107a ]
>      Stopped: [ lnx0223a lnx0223b ]
> fence-lnx0106a (stonith:external/libvirt): Started lnx0223b
> fence-lnx0107a (stonith:external/libvirt): Started lnx0223a
> lnx0106a (ocf::heartbeat:VirtualDomain): Started lnx0223b
> lnx0107a (ocf::heartbeat:VirtualDomain): Started lnx0223a
> remote-paceip (ocf::heartbeat:IPaddr): Started vm-lnx0106a
>
> node section shows:
> ... [XML node entries mangled by the list archive] ...
>
> Hawk/Status still show remote nodes as "Offline"

Ah, it's good then ;) Getting them to show up at all is the main goal.
I'm going to guess it pulls the status from the node state section; that
has been maintained for remote nodes only since Pacemaker 1.1.15, and
there are still some corner cases being addressed. Cluster nodes' state
is learned automatically from corosync, but remote nodes have no such
mechanism, so tracking state is much trickier.
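[Editor's note: since the XML in the thread was mangled, here is a rough sketch of what nodes-section entries look like, for anyone hitting the same issue. Node names are taken from this thread; ids and attributes are illustrative and vary by setup.]

```xml
<nodes>
  <!-- A full cluster node, with an instance attribute. -->
  <node id="167873318" uname="lnx0223a">
    <instance_attributes id="nodes-167873318">
      <nvpair id="nodes-167873318-maintenance"
              name="maintenance" value="off"/>
    </instance_attributes>
  </node>
  <!-- A remote node entry, distinguished by type="remote". -->
  <node id="vm-lnx0106a" uname="vm-lnx0106a" type="remote"/>
</nodes>
```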
Re: [ClusterLabs] Get rid of reload altogether
Jan,

Could you pls elaborate? Currently we are thinking of running a script
that will generate the list of attributes after reading from another
file. But these are running into 3000+ parameters. :( It will be a huge
effort to maintain it.
All I want is for Pacemaker not to do a stop/start when resource
attributes change. Would it be easier to modify the pacemaker source
code and ignore this change of value?

-Regards
Nikhil

On Wed, Nov 30, 2016 at 8:46 PM, Jan Pokorný wrote:

> On 28/11/16 09:44 +0530, Nikhil Utane wrote:
>> I understand the whole concept of reload and how to define parameters
>> with unique=0 so that pacemaker can call the reload operation of the
>> OCF script instead of stopping and starting the resource.
>> Now my problem is that I have 100s of parameters and I don't want to
>> specify each of those with unique=0.
>
> Would it be doable that your agent, when asked for metadata, will
> produce them as usual, but in addition runs the XML through XSL
> template that will add these unique=0 declarations for you
> (except perhaps some whitelist)?
>
> --
> Jan (Poki)
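A lightweight variant of Jan's suggestion, sketched in shell: wrap the agent's metadata action and rewrite the unique flags on the way out, so none of the 3000+ parameters needs hand-editing. `original_meta_data` here is a hypothetical stand-in for however the agent currently emits its XML; a real XSLT pass would be more robust than text substitution.

```shell
# Stand-in for the agent's existing metadata output (assumption: the
# real agent has an equivalent function or heredoc).
original_meta_data() {
  cat <<'EOF'
<resource-agent name="demo">
  <parameters>
    <parameter name="addr" unique="1"/>
    <parameter name="opts" unique="1"/>
  </parameters>
</resource-agent>
EOF
}

# Wrapped metadata action: force every parameter to unique="0" so a
# parameter change triggers reload instead of stop/start.
meta_data() {
  original_meta_data | sed 's/unique="1"/unique="0"/g'
}

meta_data
```

Parameters that genuinely must force a restart (a whitelist, as Jan notes) would be excluded from the substitution.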
[ClusterLabs] Re: Re: hawk - pacemaker remote
> From: Kristoffer Grönlund
> To: philipp.achmuel...@arz.at, kgail...@redhat.com, Cluster Labs -
> All topics related to open-source clustering welcomed
> Date: 12.12.2016 16:13
> Subject: Re: [ClusterLabs] Re: hawk - pacemaker remote
>
> philipp.achmuel...@arz.at writes:
>
>> tried several things, didn't get this working.
>> Any examples how to configure this?
>> Also how to configure for VirtualDomain with remote_node enabled
>>
>> thank you!
>
> Without any details, it is difficult to help - what things did you try,
> what does "not working" mean? Hawk can show remote nodes, but it only
> shows them if they have entries in the nodes section of the
> configuration (as Ken said).

hi,

this is my current testsystem:

Online: [ lnx0223a lnx0223b ]
GuestOnline: [ vm-lnx0106a@lnx0223b vm-lnx0107a@lnx0223a ]

Full list of resources:

stonith_sbd (stonith:external/sbd): Started lnx0223a
 Clone Set: base-clone [base-group]
     Started: [ lnx0223a lnx0223b ]
     Stopped: [ vm-lnx0106a vm-lnx0107a ]
FAKE1 (ocf::pacemaker:Dummy): Started lnx0223b
 Clone Set: post-clone [postfix-service]
     Started: [ vm-lnx0106a vm-lnx0107a ]
     Stopped: [ lnx0223a lnx0223b ]
fence-lnx0106a (stonith:external/libvirt): Started lnx0223b
fence-lnx0107a (stonith:external/libvirt): Started lnx0223a
lnx0106a (ocf::heartbeat:VirtualDomain): Started lnx0223b
lnx0107a (ocf::heartbeat:VirtualDomain): Started lnx0223a
remote-paceip (ocf::heartbeat:IPaddr): Started vm-lnx0106a

node section shows:
... [XML node entries mangled by the list archive] ...

Hawk/Status still show remote nodes as "Offline"

thank you!

> Cheers,
> Kristoffer
>
> --
> // Kristoffer Grönlund
> // kgronl...@suse.com
[ClusterLabs] Re: [cluster-lab] reboot standby node
Could that be the result of a failed migration combined with a bad monitor?

>>> Omar Jaber wrote on 11.12.2016 at 23:19 in message:
> Hi all,
> I have a cluster containing three nodes with different scores for the
> location constraint, and I have a group resource (a service that exists
> in the /etc/init.d/ folder) running on the node that has the highest
> score for the location constraint. When I try to reboot one of the
> standby nodes, I see that when the standby node comes back up, the
> resource is stopped on the master node and restarted again. When I check
> the pacemaker status I see the following error:
> "error: resource 'resource_name' is active on 2 nodes attempting recovery"
> Then I disabled the pcs cluster service at boot time on the standby node
> by running the command "pcs cluster disable", then I rebooted the node
> and I see the resource is started on the standby node (because the
> resource is stored in the /etc/init.d folder).
> After that I started the pcs cluster service on the standby node and I
> see the same error is generated:
> "error: resource 'resource_name' is active on 2 nodes attempting recovery"
>
> The problem does not happen without rebooting the standby node. For
> example, if I stop the pcs cluster service on the standby node, run the
> resource on the standby node, then start the pcs cluster, the error
> "error: resource 'resource_name' is active on 2 nodes attempting
> recovery" is not generated in this case.
>
> Thanks
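[Editor's note: a classic cause of "active on 2 nodes" with /etc/init.d resources is the service also auto-starting at boot, outside Pacemaker's control. A cluster-managed service should be started only by the cluster. A sketch, with "resource_name" as a placeholder for the actual service:]

```shell
# Stop the service from auto-starting at boot; Pacemaker will start it.
chkconfig resource_name off        # SysV init systems
# or, on systemd systems:
# systemctl disable resource_name

# If the cluster itself should come up at boot, enable it explicitly:
pcs cluster enable
```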