Re: [ClusterLabs] resource management of standby node

2020-12-09 Thread Ken Gaillot
On Wed, 2020-12-09 at 10:57 +0800, Roger Zhou wrote:
> On 12/1/20 4:03 PM, Ulrich Windl wrote:
> > > > > Ken Gaillot wrote on 30.11.2020 at 19:52 in message:
> > 
> > ...
> > > 
> > > Though there's nothing wrong with putting all nodes in standby.
> > > Another
> > > alternative would be to set the stop-all-resources cluster
> > > property.
> > 
> > Hi Ken,
> > 
> > thanks for the valuable feedback!
> > 
> > I was looking for that, but unfortunately crm shell cannot set that
> > from the resource (or node) context; only from the configure
> > context.
> > I don't know what a good syntax would be: "resource stop all" /
> > "resource start all", or "resource stop-all" / "resource unstop-all"
> > (the asymmetry is that after a "stop all" you cannot start a single
> > resource (I guess), but have to use "start-all", which, in
> > turn, does not start resources that have a stopped role (I guess)).
> > 
> > So maybe "resource set stop-all" / "resource unset stop-all" /
> > "resource clear stop-all"
> > 
> 
> 1.
> Well, having `crm resource stop|start all` change the cluster
> property `stop-all-resources` might muddy the syntax at the
> resource level.
> To avoid that, the user interface would need to be more careful to
> convey the relevant internals up front, to head off potential
> misunderstandings or questions.
> 
> 2.
> On the other hand, people might naturally read `crm resource stop
> all` as setting `target-role=Stopped` on all resources. Technically
> this seems a bit awkward, with no obvious benefit compared to
> stop-all-resources. Pacemaker developers could comment further on
> the internals here.

You can set target-role to Stopped in resource defaults, and the
difference compared to stop-all-resources is that stop-all-resources
would take precedence over any target-role set on a specific resource,
while the resource default would not.

Setting target-role to Stopped on every resource individually would be
the same as setting stop-all-resources, just more painful. :)
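
For example, a rough crmsh sketch (untested; adjust to your setup):

    # Stop everything via a resource default; an explicit target-role on
    # a specific resource still takes precedence over this default:
    crm configure rsc_defaults target-role=Stopped

    # Stop everything via the cluster property; this takes precedence
    # over any per-resource target-role:
    crm configure property stop-all-resources=true

    # Undo either of the above:
    crm configure rsc_defaults target-role=Started
    crm configure property stop-all-resources=false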

> 3.
> `resource set|unset` adds more commands under `resource`, which will
> confuse some users and should be avoided in my view.
> 
> I feel more discussion is needed, though my gut feeling is that
> approach 1 is the better one.
> 
> Anyway, a good topic indeed. Feedback from more users would be useful
> to shape a better UI/UX. I can imagine some people may even suggest
> "--all", by the way.
> 
> Thanks,
> Roger
> 
-- 
Ken Gaillot 



Re: [ClusterLabs] resource management of standby node

2020-12-08 Thread Roger Zhou



On 12/1/20 4:03 PM, Ulrich Windl wrote:

> > > > Ken Gaillot wrote on 30.11.2020 at 19:52 in message:
> 
> ...
> > 
> > Though there's nothing wrong with putting all nodes in standby. Another
> > alternative would be to set the stop-all-resources cluster property.
> 
> Hi Ken,
> 
> thanks for the valuable feedback!
> 
> I was looking for that, but unfortunately crm shell cannot set that from the
> resource (or node) context; only from the configure context.
> I don't know what a good syntax would be: "resource stop all" /
> "resource start all", or "resource stop-all" / "resource unstop-all"
> (the asymmetry is that after a "stop all" you cannot start a single
> resource (I guess), but have to use "start-all", which, in turn, does
> not start resources that have a stopped role (I guess)).
> 
> So maybe "resource set stop-all" / "resource unset stop-all" /
> "resource clear stop-all"



1.
Well, having `crm resource stop|start all` change the cluster property
`stop-all-resources` might muddy the syntax at the resource level.
To avoid that, the user interface would need to be more careful to convey
the relevant internals up front, to head off potential misunderstandings
or questions.


2.
On the other hand, people might naturally read `crm resource stop all` as
setting `target-role=Stopped` on all resources. Technically this seems a
bit awkward, with no obvious benefit compared to stop-all-resources.
Pacemaker developers could comment further on the internals here.


3.
`resource set|unset` adds more commands under `resource`, which will
confuse some users and should be avoided in my view.


I feel more discussion is needed, though my gut feeling is that approach 1
is the better one.
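
For reference, a rough crmsh sketch of what the two readings would map to
today (untested; the resource names are made up):

    # Reading 2: set target-role=Stopped on every resource, one by one
    crm resource stop dummy1
    crm resource stop dummy2

    # Reading 1: flip the single cluster property instead
    crm configure property stop-all-resources=true
    # ... and later ...
    crm configure property stop-all-resources=false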


Anyway, a good topic indeed. Feedback from more users would be useful to
shape a better UI/UX. I can imagine some people may even suggest "--all",
by the way.


Thanks,
Roger



Re: [ClusterLabs] resource management of standby node

2020-11-30 Thread Andrei Borzenkov
On Mon, Nov 30, 2020 at 3:11 PM Ulrich Windl wrote:
>
> Hi!
>
> In SLES15 I'm surprised by what a standby node does: my guess was that a
> standby node would stop all resources and then just "shut up", but it
> seems the cluster still tries to place resources and calls monitor
> operations.
>

Standby nodes are ineligible for running resources, but that does not stop
pacemaker from trying to place resources somewhere in the cluster.
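
For example (a rough crmsh sketch; "h18" is just the node name from your
log below):

    # Make the node ineligible to run resources:
    crm node standby h18

    # The cluster still (re)places those resources on the remaining nodes;
    # bring the node back when you are done:
    crm node online h18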

> Like this after a configuration change:
> pacemaker-controld[49413]:  notice: Result of probe operation for 
> prm_test_raid_md1 on h18: not running
>

A probe is not a monitor. Normally it happens once, when pacemaker is
started. It should not really be affected by putting a node in standby.
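
As a rough illustration (hypothetical primitive, untested): the recurring
monitor is what you configure, while the probe is the implicit one-shot
check (interval 0) pacemaker runs to learn a resource's state on a node:

    # Only the recurring monitor appears in the configuration:
    crm configure primitive dummy1 ocf:pacemaker:Dummy \
        op monitor interval=60s timeout=20s

    # The probe is not configured anywhere; it just shows up in the logs
    # as "Result of probe operation for dummy1 on <node>: ..."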

> Or this (on the DC node):
> pacemaker-schedulerd[69599]:  notice: Cannot pair prm_test_raid_md1:0 with 
> instance of cln_DLM
>

So? As mentioned, pacemaker still attempts to manage resources; it just
excludes standby nodes from the list of possible candidates. If all nodes
are in standby mode, no resource can run anywhere, but pacemaker still
needs to try placing resources to determine that. Maybe you really want
cluster maintenance mode instead.
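
A minimal sketch of that alternative (crmsh, untested):

    # Tell pacemaker to stop managing resources (they keep running,
    # untouched) while you edit the configuration:
    crm configure property maintenance-mode=true

    # ... make your configuration changes ...

    # Resume normal management afterwards:
    crm configure property maintenance-mode=false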

> Maybe I should have done this differently, but after a test setup I
> noticed that I had named my primitives inconsistently, and wanted to
> mass-rename resources.
> Since renaming running resources has had issues in the past, I wanted to
> stop all resources before changing the configuration.
> So I was expecting the cluster to be silent until I put at least one node
> online again.
>
> Expectation failed. Is there a better way to do it?
>
> Regards,
> Ulrich
>
>
___
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/