On 16.05.2018 06:52, Casey & Gina wrote:
> Hi, I'm trying to figure out how to get fencing/stonith going with
> pacemaker.
>
> As far as I understand it, they are both part of the same thing -
> setting up stonith means setting up fencing. If I'm mistaken on
> that, please let me know.
>
They are.
Hi, I'm trying to figure out how to get fencing/stonith going with pacemaker.
As far as I understand it, they are both part of the same thing - setting up
stonith means setting up fencing. If I'm mistaken on that, please let me know.
Specifically, I'm wanting to use the external/vcenter plugin.
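For reference, a minimal sketch of configuring the external/vcenter plugin with pcs. The server name, credential-store path, and the hostname=VM-name mapping in HOSTLIST are all illustrative placeholders and must be adapted to the actual environment:

```shell
# Hypothetical values: adjust VI_SERVER, VI_CREDSTORE, and HOSTLIST
# (cluster-node-name=vSphere-VM-name pairs) to your setup.
pcs stonith create vcenter-fence stonith:external/vcenter \
    VI_SERVER="vcenter.example.com" \
    VI_CREDSTORE="/etc/pacemaker/vicredentials.xml" \
    HOSTLIST="node1=node1-vm;node2=node2-vm" \
    RESETPOWERON=0

# Enable fencing cluster-wide once a working stonith device exists
pcs property set stonith-enabled=true
```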
Source code for the fourth (and likely final) release candidate for
Pacemaker version 2.0.0 is now available at:
https://github.com/ClusterLabs/pacemaker/releases/tag/Pacemaker-2.0.0-rc4
This release restores the possibility of rolling (live) upgrades from
Pacemaker 1.1.11 or later, on top of cor
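A rolling upgrade is generally done one node at a time; a hedged sketch of the per-node sequence (node name "node1" is illustrative, and the exact package commands depend on the distribution):

```shell
# Repeat for each node, one at a time, waiting for the cluster to
# settle ("pcs status") between nodes.
pcs cluster standby node1      # move resources off the node
pcs cluster stop node1         # stop pacemaker/corosync on the node
# ... upgrade the Pacemaker packages on node1 here ...
pcs cluster start node1        # rejoin the cluster
pcs cluster unstandby node1    # allow resources back onto the node
```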
On Tue, 2018-05-15 at 13:25 +0300, George Melikov wrote:
> Hello,
>
> Sorry for a (likely) dumb question,
> but is there a way to store and sync data via pacemaker/corosync?
>
> Is there any way to store key/value properties or files?
>
> I've found `pcs property set --force`, but it didn't su
Thanks, I should have seen that. I just assumed that everything was working
fine because `pcs status` shows no errors.
This leads me to another question - is there a way to trigger a rebuild of a
slave with pcs? Or do I need to use `pcs cluster stop`, then manually do a new
pg_basebackup, cop
Hello,
Sorry for a (likely) dumb question,
but is there a way to store and sync data via pacemaker/corosync?
Is there any way to store key/value properties or files?
I've found `pcs property set --force`, but it didn't survive cluster restart.
Sincerely,
G
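One option is node attributes in the CIB, which corosync replicates across the cluster. A sketch using crm_attribute ("node1" and the key/value are placeholders); with `--lifetime forever` the attribute is stored in the permanent CIB section and survives restarts, unlike `--lifetime reboot`:

```shell
# Store a persistent node attribute in the CIB
crm_attribute --node node1 --name my_key --update my_value --lifetime forever

# Read it back
crm_attribute --node node1 --name my_key --query --lifetime forever
```

Note this is for small key/value data only; Pacemaker does not sync arbitrary files.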
On Mon, 14 May 2018 19:08:47 +
"Shobe, Casey" wrote:
> > We do not trigger an error for such a scenario because it would require
> > the cluster to react... and there's really no way the cluster can solve
> > such an issue. So we just put a negative score, which is already quite
> > strange to be noti
Sorry, my mistake. I should use the second id. It is ok now. Thanks Tomas.
-----Original Message-----
From: 范国腾
Sent: May 15, 2018 16:19
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Re: How to change the "pcs constraint colocation set"
It could not find the id of the constraint set.
[root@node1 ~]# pcs constr
It could not find the id of the constraint set.
[root@node1 ~]# pcs constraint colocation --full
Colocation Constraints:
clvmd-clone with dlm-clone (score:INFINITY)
(id:colocation-clvmd-clone-dlm-clone-INFINITY)
pgsql-master-ip with pgsql-ha (score:INFINITY) (rsc-role:Started)
(with-rsc-role:Mas
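Given ids like those in the listing above, a constraint (including a set constraint) is removed by its id, e.g. using the first id shown:

```shell
# Remove a constraint by the id reported by "pcs constraint colocation --full"
pcs constraint remove colocation-clvmd-clone-dlm-clone-INFINITY
```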
On 15.5.2018 at 10:02, 范国腾 wrote:
Thank you, Tomas. I know how to remove a constraint with "pcs constraint
colocation remove ". Is there a command to delete a constraint colocation set?
There is "pcs constraint remove ". To get a constraint
id, run "pcs constraint colocation --full" and fin
Thank you, Tomas. I know how to remove a constraint with "pcs constraint
colocation remove ". Is there a command to delete a constraint colocation set?
-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
Sent: May 15, 2018 15:42
To: users@clusterlabs.org
Subject: Re: [Clust
On 15.5.2018 at 05:25, 范国腾 wrote:
Hi,
We have two VIP resources and we use the following command to keep them on
different nodes.
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 setoptions
score=-1000
Now we add a new node into the cluster and we add a new VIP too. We want th
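One possible approach is to remove the old set constraint by id and recreate it including the new VIP. The id placeholder and the resource name "pgsql-slave-ip3" are illustrative, not from the thread:

```shell
# Look up the set constraint's id first ("pcs constraint colocation --full"),
# then remove and recreate the set with the new VIP included.
pcs constraint remove <set-constraint-id>
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 \
    setoptions score=-1000
```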