Re: [ClusterLabs] Antwort: Re: Antwort: Re: clone resource - pacemaker remote

2016-12-13 Thread Ken Gaillot
On 12/07/2016 06:26 AM, philipp.achmuel...@arz.at wrote:
>> From: Ken Gaillot 
>> To: philipp.achmuel...@arz.at, Cluster Labs - All topics related to
>> open-source clustering welcomed 
>> Date: 05.12.2016 17:38
>> Subject: Re: Antwort: Re: [ClusterLabs] clone resource - pacemaker remote
>>
>> On 12/05/2016 09:20 AM, philipp.achmuel...@arz.at wrote:
>> > Ken Gaillot wrote on 02.12.2016 19:27:09:
>> >
>> >> From: Ken Gaillot 
>> >> To: users@clusterlabs.org
>> >> Date: 02.12.2016 19:32
>> >> Subject: Re: [ClusterLabs] clone resource - pacemaker remote
>> >>
>> >> On 12/02/2016 07:08 AM, philipp.achmuel...@arz.at wrote:
>> >> > hi,
>> >> >
>> >> > what is the best way to prevent a clone resource from trying to run
>> >> > on remote/guest nodes?
>> >>
>> >> location constraints with a negative score:
>> >>
>> >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/
>> >> Pacemaker_Explained/index.html#_deciding_which_nodes_a_resource_can_run_on
>> >>
>> >>
>> >> you can even use a single constraint with a rule based on #kind ne
>> >> cluster, so you don't need a separate constraint for each node:
>> >>
>> >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/
>> >> Pacemaker_Explained/index.html#_node_attribute_expressions
>> >>
>> >>
>> >> alternatively, you can set symmetric-cluster=false and use positive
>> >> constraints for cluster nodes only
>> >>
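For illustration, a minimal crm shell sketch of both approaches; the resource
name my-clone, the node names, and the constraint ids are placeholders, not
taken from the configuration quoted below:

# ban the clone from all non-cluster (remote/guest) nodes with a single rule:
location ban-my-clone-on-remotes my-clone \
    rule -inf: #kind ne cluster

# or make the cluster opt-in and allow only the cluster nodes explicitly:
property symmetric-cluster=false
location my-clone-on-node1 my-clone 100: node1
location my-clone-on-node2 my-clone 100: node2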
>> >
>> > Should I set the constraint on the single primitive, the group, or the
>> > clone resource? Are there any advantages/disadvantages to using one of
>> > these methods?
>>
>> When a resource is cloned, you want to refer to the clone name in any
>> constraints, rather than the primitive name.
>>
>> For a group, it doesn't really matter, but it's simplest to use the
>> group name in constraints -- mainly that keeps you from accidentally
>> setting conflicting constraints on different members of the group. And
>> of course group members are automatically ordered/colocated with each
>> other, so you don't need individual constraints for that.
>>
> 
> Setting the location constraint on the group didn't work:
> 
> ERROR: error: unpack_location_tags: Constraint
> 'location-base-group': Invalid reference to 'base-group'

Maybe a syntax error in your command, or a bug in the tool you're using?
The CIB XML is fine with something like this:
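A minimal sketch of such a constraint (the rule here mirrors the one quoted
further down in this thread; the id values are illustrative):

<rsc_location id="location-base-group" rsc="base-group">
  <rule id="location-base-group-rule" score="-INFINITY">
    <expression id="location-base-group-rule-expr" attribute="#kind" operation="ne" value="cluster"/>
  </rule>
</rsc_location>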



> For the clone it works as expected, but crm_mon shows the "disallowed"
> nodes in the "Stopped" set. Is this working as designed, or how can I
> prevent this?

You asked it to :)

-r == show inactive resources
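
For example, a one-shot status without that option omits the inactive
instances; a sketch based on the crm_mon output quoted below:

crm_mon -1
 Clone Set: base-clone [base-group]
 Started: [ lnx0223a lnx0223b ]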

> 
> crm configure show
> ...
> location location-base-clone base-clone resource-discovery=never \
>    rule -inf: #kind ne cluster
> ...
> 
> crm_mon -r
>  Clone Set: base-clone [base-group]
>  Started: [ lnx0223a lnx0223b ]
>  Stopped: [ vm-lnx0106a vm-lnx0107a ]
> 
>> >
>> >> >
>> >> > ...
>> >> > node 167873318: lnx0223a \
>> >> > attributes maintenance=off
>> >> > node 167873319: lnx0223b \
>> >> > attributes maintenance=off
>> >> > ...
>> >> > primitive vm-lnx0107a VirtualDomain \
>> >> >     params hypervisor="qemu:///system" config="/etc/kvm/lnx0107a.xml" \
>> >> >     meta remote-node=lnx0107a238 \
>> >> >     utilization cpu=1 hv_memory=4096
>> >> > primitive remote-lnx0106a ocf:pacemaker:remote \
>> >> >     params server=xx.xx.xx.xx \
>> >> >     meta target-role=Started
>> >> > group base-group dlm clvm vg1
>> >> > clone base-clone base-group \
>> >> >     meta interleave=true target-role=Started
>> >> > ...
>> >> >
>> >> > Dec  1 14:32:57 lnx0223a crmd[9826]:   notice: Initiating start operation dlm_start_0 on lnx0107a238
>> >> > Dec  1 14:32:58 lnx0107a pacemaker_remoted[1492]:   notice: executing - rsc:dlm action:start call_id:7
>> >> > Dec  1 14:32:58 lnx0107a pacemaker_remoted[1492]:   notice: finished - rsc:dlm action:start call_id:7  exit-code:5 exec-time:16ms queue-time:0ms
>> >> > Dec  1 14:32:58 lnx0223b crmd[9328]:    error: Result of start operation for dlm on lnx0107a238: Not installed
>> >> > Dec  1 14:32:58 lnx0223a crmd[9826]:  warning: Action 31 (dlm_start_0) on lnx0107a238 failed (target: 0 vs. rc: 5): Error
>> >> > Dec  1 14:32:58 lnx0223a crmd[9826]:  warning: Action 31 (dlm_start_0) on lnx0107a238 failed (target: 0 vs. rc: 5): Error
>> >> > Dec  1 14:34:07 lnx0223a pengine[9824]:  warning: Processing failed op start for dlm:2 on lnx0107a238: not installed (5)
>> >> > Dec  1 14:34:07 lnx0223a pengine[9824]:  warning: Processing failed op start for dlm:2 on lnx0107a238: not installed (5)
>> >> > ...
>> >> > Dec  1 14:32:49 lnx0223a pengine[9824]:   notice: Start   dlm:3#011(remote-lnx0106a)
>> >> > Dec  1 14:32:49 lnx0223a crmd[9826]:   notice: Initiating monitor operation dlm_monitor_0 locally on remote-lnx0106a
>> >> > Dec  1 14:32:50 lnx0223a crmd[9826]:    error: Result of probe operation for dlm on remot

[ClusterLabs] Antwort: Re: Antwort: Re: clone resource - pacemaker remote

2016-12-07 Thread philipp . achmueller
> From: Ken Gaillot 
> To: philipp.achmuel...@arz.at, Cluster Labs - All topics related to
> open-source clustering welcomed 
> Date: 05.12.2016 17:38
> Subject: Re: Antwort: Re: [ClusterLabs] clone resource - pacemaker remote
> 
> On 12/05/2016 09:20 AM, philipp.achmuel...@arz.at wrote:
> > Ken Gaillot wrote on 02.12.2016 19:27:09:
> > 
> >> From: Ken Gaillot 
> >> To: users@clusterlabs.org
> >> Date: 02.12.2016 19:32
> >> Subject: Re: [ClusterLabs] clone resource - pacemaker remote
> >>
> >> On 12/02/2016 07:08 AM, philipp.achmuel...@arz.at wrote:
> >> > hi,
> >> >
> >> > what is the best way to prevent a clone resource from trying to run
> >> > on remote/guest nodes?
> >>
> >> location constraints with a negative score:
> >>
> >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/
> >> Pacemaker_Explained/index.html#_deciding_which_nodes_a_resource_can_run_on
> >>
> >>
> >> you can even use a single constraint with a rule based on #kind ne
> >> cluster, so you don't need a separate constraint for each node:
> >>
> >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/
> >> Pacemaker_Explained/index.html#_node_attribute_expressions
> >>
> >>
> >> alternatively, you can set symmetric-cluster=false and use positive
> >> constraints for cluster nodes only
> >>
> > 
> > Should I set the constraint on the single primitive, the group, or the
> > clone resource? Are there any advantages/disadvantages to using one of
> > these methods?
> 
> When a resource is cloned, you want to refer to the clone name in any
> constraints, rather than the primitive name.
> 
> For a group, it doesn't really matter, but it's simplest to use the
> group name in constraints -- mainly that keeps you from accidentally
> setting conflicting constraints on different members of the group. And
> of course group members are automatically ordered/colocated with each
> other, so you don't need individual constraints for that.
> 

Setting the location constraint on the group didn't work:

ERROR: error: unpack_location_tags: Constraint 'location-base-group':
Invalid reference to 'base-group'

For the clone it works as expected, but crm_mon shows the "disallowed"
nodes in the "Stopped" set. Is this working as designed, or how can I
prevent this?

crm configure show
...
location location-base-clone base-clone resource-discovery=never \
    rule -inf: #kind ne cluster
...

crm_mon -r
 Clone Set: base-clone [base-group]
 Started: [ lnx0223a lnx0223b ]
 Stopped: [ vm-lnx0106a vm-lnx0107a ]

> > 
> >> >
> >> > ...
> >> > node 167873318: lnx0223a \
> >> > attributes maintenance=off
> >> > node 167873319: lnx0223b \
> >> > attributes maintenance=off
> >> > ...
> >> > primitive vm-lnx0107a VirtualDomain \
> >> >     params hypervisor="qemu:///system" config="/etc/kvm/lnx0107a.xml" \
> >> >     meta remote-node=lnx0107a238 \
> >> >     utilization cpu=1 hv_memory=4096
> >> > primitive remote-lnx0106a ocf:pacemaker:remote \
> >> >     params server=xx.xx.xx.xx \
> >> >     meta target-role=Started
> >> > group base-group dlm clvm vg1
> >> > clone base-clone base-group \
> >> >     meta interleave=true target-role=Started
> >> > ...
> >> >
> >> > Dec  1 14:32:57 lnx0223a crmd[9826]:   notice: Initiating start operation dlm_start_0 on lnx0107a238
> >> > Dec  1 14:32:58 lnx0107a pacemaker_remoted[1492]:   notice: executing - rsc:dlm action:start call_id:7
> >> > Dec  1 14:32:58 lnx0107a pacemaker_remoted[1492]:   notice: finished - rsc:dlm action:start call_id:7  exit-code:5 exec-time:16ms queue-time:0ms
> >> > Dec  1 14:32:58 lnx0223b crmd[9328]:    error: Result of start operation for dlm on lnx0107a238: Not installed
> >> > Dec  1 14:32:58 lnx0223a crmd[9826]:  warning: Action 31 (dlm_start_0) on lnx0107a238 failed (target: 0 vs. rc: 5): Error
> >> > Dec  1 14:32:58 lnx0223a crmd[9826]:  warning: Action 31 (dlm_start_0) on lnx0107a238 failed (target: 0 vs. rc: 5): Error
> >> > Dec  1 14:34:07 lnx0223a pengine[9824]:  warning: Processing failed op start for dlm:2 on lnx0107a238: not installed (5)
> >> > Dec  1 14:34:07 lnx0223a pengine[9824]:  warning: Processing failed op start for dlm:2 on lnx0107a238: not installed (5)
> >> > ...
> >> > Dec  1 14:32:49 lnx0223a pengine[9824]:   notice: Start   dlm:3#011(remote-lnx0106a)
> >> > Dec  1 14:32:49 lnx0223a crmd[9826]:   notice: Initiating monitor operation dlm_monitor_0 locally on remote-lnx0106a
> >> > Dec  1 14:32:50 lnx0223a crmd[9826]:    error: Result of probe operation for dlm on remote-lnx0106a: Not installed
> >> > Dec  1 14:32:50 lnx0223a crmd[9826]:  warning: Action 5 (dlm_monitor_0) on remote-lnx0106a failed (target: 7 vs. rc: 5): Error
> >> > Dec  1 14:32:50 lnx0223a crmd[9826]:  warning: Action 5 (dlm_monitor_0) on remote-lnx0106a failed (target: 7 vs. rc: 5): Error
> >> > ...
> >> >
> >> > ---
> >> > env: pacemak