Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave resource has one VIP

2018-03-09 Thread Jehan-Guillaume de Rorthais
On Fri, 9 Mar 2018 00:54:00 +
范国腾  wrote:

> Thanks Rorthais, got it. The following commands make sure that it moves
> to the master if there is no standby alive:
> 
> pcs constraint colocation add pgsql-ip-stby1 with slave pgsql-ha 100
> pcs constraint colocation add pgsql-ip-stby1 with pgsql-ha 50

Exactly.


Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave resource has one VIP

2018-03-08 Thread Jehan-Guillaume de Rorthais
On Thu, 8 Mar 2018 01:45:43 +
范国腾 <fanguot...@highgo.com> wrote:

> Sorry, Rorthais, I thought that the link and the attachment were the same
> document yesterday.

No problem.

For your information, I merged the draft into the official documentation
yesterday.

> I just read the attachment and that is exactly what I asked
> originally.

Excellent! Glad it could help.

> I have two questions on the following two commands:
> # pcs constraint colocation add pgsql-ip-stby1 with slave pgsql-ha 10
> Q: Does the score 10 mean "move to the master if there is no standby
> alive"?

Kind of. It actually says nothing about moving to the master. It just says
the slave IP should prefer to be located with a slave. If the slave nodes are
down or in standby, the IP "can" move to the master, as nothing forbids it.

In fact, while writing this sentence, I realized there's nothing to push the
slave IPs to the master if the other nodes are up but the pgsql-ha slaves are
stopped or banned. The configuration I provided was incomplete.

1. I added the missing constraints to the doc online.
2. Note that I raised all the scores so they are higher than the stickiness.

See:
https://clusterlabs.github.io/PAF/CentOS-7-admin-cookbook.html#adding-ips-on-slaves-nodes
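
For illustration, the resulting pattern looks something like this (the scores
are examples; the cookbook above has the authoritative commands):

  # prefer a slave node for the standby IP...
  pcs constraint colocation add pgsql-ip-stby1 with slave pgsql-ha 100
  # ...but also allow it to follow pgsql-ha anywhere, so it can fall
  # back to the master when no slave is available
  pcs constraint colocation add pgsql-ip-stby1 with pgsql-ha 50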

Sorry for this :/

> # pcs constraint order start pgsql-ha then start pgsql-ip-stby1 kind=Mandatory
> Q: I did not set the order and have not hit any issue so far. Should I add
> this constraint? What will happen if I omit it?

The IP address can start before PostgreSQL is up on the node. Client
connections will then be rejected with the error "PostgreSQL is not listening
on host [...]".
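
For completeness, a sketch of the matching order constraints for the slave
IPs discussed below (same pattern as the command above, names from this
thread):

  pcs constraint order start pgsql-ha then start pgsql-slave-ip1 kind=Mandatory
  pcs constraint order start pgsql-ha then start pgsql-slave-ip2 kind=Mandatory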

> Here is what I have now:
> pcs resource create pgsql-slave-ip1 ocf:heartbeat:IPaddr2 ip=192.168.199.186
>   nic=enp3s0f0 cidr_netmask=24 op monitor interval=10s; 
> pcs resource create pgsql-slave-ip2 ocf:heartbeat:IPaddr2 ip=192.168.199.187 
>   nic=enp3s0f0 cidr_netmask=24 op monitor interval=10s; 
> pcs constraint colocation add pgsql-slave-ip1 with pgsql-ha 

It is missing the score and the role. Without a role specification, the IP
can colocate with the Master or a Slave with no preference.

> pcs constraint colocation add pgsql-slave-ip2 with pgsql-ha 

Same here: it is missing the score and the role.
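
As a sketch, the corrected commands might look like this (the scores are
illustrative, following the cookbook pattern):

  pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha 100
  pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha 100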

> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2
>   pgsql-master-ip setoptions score=-1000

The score seems too high in my opinion, compared to the other ones.

You should probably remove all the colocation constraints and try with the
ones I pushed online.
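
To find and remove them, something like this should work (assuming a pcs
version from that era; <constraint-id> is a placeholder for each id shown):

  pcs constraint show --full      # list constraints with their ids
  pcs constraint remove <constraint-id>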

Regards,

> -----Original Message-----
> From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
> Sent: March 7, 2018 16:29
> To: 范国腾 <fanguot...@highgo.com>
> Cc: Cluster Labs - All topics related to open-source clustering welcomed
> <users@clusterlabs.org>
> Subject: Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave
> resource has one VIP
> 
> On Wed, 7 Mar 2018 01:27:16 +
> 范国腾 <fanguot...@highgo.com> wrote:
> 
> > Thank you, Rorthais,
> > 
> > I read the link and it is very helpful.  
> 
> Did you read the draft I attached to the email? It was the main purpose of my
> answer: helping you with IPs on slaves. It seems to me your mail reports
> issues different from the original subject.
> 
> > There are some issues that I ran into when I installed the cluster.
> 
> I suppose this is another subject and we should open a new thread with the
> appropriate subject.
> 
> > 1. “pcs cluster stop” sometimes could not stop the cluster.
> 
> You would have to give some more details about the context where "pcs cluster
> stop" timed out.
> 
> > 2. When I upgrade PAF, I just replace the pgsqlms file. When
> > I upgrade PostgreSQL, I just replace /usr/local/pgsql/.
> 
> I believe both actions are documented with best practices in the links I
> gave you.
> 
> > 3. If the cluster does not stop normally, the pg_controldata status is
> > not "SHUTDOWN", and then PAF will not start PostgreSQL any more,
> > so I normally change pgsqlms as below after installing PAF.
> > [...]  
> 
> This should be discussed to understand the exact context before considering
> your patch.
> 
> At first glance, your patch seems quite dangerous, as it bypasses the sanity
> checks.
> 
> Please, could you start a new thread with a proper subject and add extensive
> information about this issue? You could open a new issue on the PAF
> repository as well: https://github.com/ClusterLabs/PAF/issues
> 
> Regards,



-- 
Jehan-Guillaume de Rorthais
Dalibo


Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave resource has one VIP

2018-03-07 Thread Jehan-Guillaume de Rorthais
On Wed, 7 Mar 2018 01:27:16 +
范国腾  wrote:

> Thank you, Rorthais,
> 
> I read the link and it is very helpful.

Did you read the draft I attached to the email? It was the main purpose of my
answer: helping you with IPs on slaves. It seems to me your mail reports
issues different from the original subject.

> There are some issues that I ran into when I installed the cluster.

I suppose this is another subject and we should open a new thread with
the appropriate subject.

> 1. “pcs cluster stop” sometimes could not stop the cluster.

You would have to give some more details about the context where "pcs cluster
stop" timed out.

> 2. When I upgrade PAF, I just replace the pgsqlms file. When I
> upgrade PostgreSQL, I just replace /usr/local/pgsql/.

I believe both actions are documented with best practices in the links I gave
you.

> 3. If the cluster does not stop normally, the pg_controldata status is not
> "SHUTDOWN", and then PAF will not start PostgreSQL any more, so I
> normally change pgsqlms as below after installing PAF.
> [...]

This should be discussed to understand the exact context before considering
your patch.

At first glance, your patch seems quite dangerous, as it bypasses the sanity
checks.

Please, could you start a new thread with a proper subject and add extensive
information about this issue? You could open a new issue on the PAF repository
as well: https://github.com/ClusterLabs/PAF/issues

Regards,


[ClusterLabs] Re: Re: Re: How to configure to make each slave resource has one VIP

2018-03-05 Thread 范国腾
Thank you, Ken. Got it :)

-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Ken Gaillot
Sent: March 6, 2018 7:18
To: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>
Subject: Re: [ClusterLabs] Re: Re: How to configure to make each slave
resource has one VIP

On Sun, 2018-02-25 at 02:24 +, 范国腾 wrote:
> Hello,
> 
> If all of the slave nodes crash, none of the slave VIPs can work.
> 
> Do we have any way to make all of the slave VIPs bind to the master
> node if there are no slave nodes in the system?
> 
> That way, the client will not notice that the system has a problem.
> 
> Thanks

Hi,

If you colocate all the slave IPs "with pgsql-ha" instead of "with slave 
pgsql-ha", then they can run on either master or slave nodes.

Including the master IP in the anti-colocation set will keep them apart 
normally.
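
Putting both points together might look something like this (a sketch only;
the scores are illustrative and the resource names are the ones used in this
thread):

  # colocate with pgsql-ha in any role, so the IPs may fall back to the master
  pcs constraint colocation add pgsql-slave-ip1 with pgsql-ha 50
  pcs constraint colocation add pgsql-slave-ip2 with pgsql-ha 50
  # keep all the IPs, including the master IP, apart under normal conditions
  pcs constraint colocation set pgsql-master-ip pgsql-slave-ip1 pgsql-slave-ip2 setoptions score=-1000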

> 
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> Sent: February 23, 2018 17:37
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] Re: How to configure to make each slave resource
> has one VIP
> 
> On 23.2.2018 10:16, 范国腾 wrote:
> > Tomas,
> > 
> > Thank you very much. I made the change according to your suggestion
> > and it works.
> > 
> > There is a question: if there are too many nodes (e.g. 10 slave nodes
> > in total), I need to run "pcs constraint colocation add pgsql-slave-ipx
> > with pgsql-slave-ipy -INFINITY" many times. Is there a simpler command
> > to do this?
> 
> I think colocation set does the trick:
> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2
> pgsql-slave-ip3 setoptions score=-INFINITY
> You may specify as many resources as you need in this command.
> 
> Tomas
> 
> > 
> > Master/Slave Set: pgsql-ha [pgsqld]
> >   Masters: [ node1 ]
> >   Slaves: [ node2 node3 ]
> >   pgsql-master-ip(ocf::heartbeat:IPaddr2):   Started
> > node1
> >   pgsql-slave-ip1(ocf::heartbeat:IPaddr2):   Started
> > node3
> >   pgsql-slave-ip2(ocf::heartbeat:IPaddr2):   Started
> > node2
> > 
> > Thanks
> > Steven
> > 
> > -----Original Message-----
> > From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> > Sent: February 23, 2018 17:02
> > To: users@clusterlabs.org
> > Subject: Re: [ClusterLabs] How to configure to make each slave resource
> > has one VIP
> > 
> > On 23.2.2018 08:17, 范国腾 wrote:
> > > Hi,
> > > 
> > > Our system manages the database (one master and multiple slaves).
> > > At first we used one VIP for multiple slave resources.
> > > 
> > > Now I want to change the configuration so that each slave resource
> > > has a separate VIP. For example, I have 3 slave nodes and my VIP
> > > group has 2 VIPs; the 2 VIPs bind to node1 and node2 now; when node2
> > > fails, the VIP should move to node3.
> > > 
> > > 
> > > I use the following commands to add the VIPs:
> > > 
> > >       pcs resource group add pgsql-slave-group pgsql-slave-ip1
> > >       pgsql-slave-ip2
> > > 
> > >       pcs constraint colocation add pgsql-slave-group with slave
> > >       pgsql-ha INFINITY
> > > 
> > > But now the two VIPs are on the same node:
> > > 
> > > Master/Slave Set: pgsql-ha [pgsqld]
> > >  Masters: [ node1 ]
> > >  Slaves: [ node2 node3 ]
> > > pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started node1
> > > Resource Group: pgsql-slave-group
> > >  pgsql-slave-ip1    (ocf::heartbeat:IPaddr2):   Started node2
> > >  pgsql-slave-ip2    (ocf::heartbeat:IPaddr2):   Started node2
> > > 
> > > Could anyone tell me how to configure this so that each slave node
> > > has a VIP?
> > 
> > Resources in a group always run on the same node. You want the IP
> > resources to run on different nodes, so you cannot put them into a
> > group.
> > 
> > This will take the resources out of the group:
> > pcs resource ungroup pgsql-slave-group
> > 
> > Then you can set colocation constraints for them:
> > pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha 
> > pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
> > 
> > You may also need to tell pacemaker not to put both ips on the same
> > node:
> > pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 -INFINITY

Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave resource has one VIP

2018-03-05 Thread Ken Gaillot
On Sat, 2018-02-24 at 03:02 +, 范国腾 wrote:
> Thank you, Ken,
> 
> So I could use the following command: pcs constraint colocation set
> pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000

Correct

(sorry for the late reply)
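
To make the tradeoff concrete, a sketch using the resource names from this
thread:

  # -INFINITY: the IPs can never share a node, even if nodes fail
  pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-INFINITY
  # finite negative score: prefer separate nodes, but allow sharing
  # when not enough nodes are left
  pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000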

> 
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Ken Gaillot
> Sent: February 23, 2018 23:14
> To: Cluster Labs - All topics related to open-source clustering
> welcomed <users@clusterlabs.org>
> Subject: Re: [ClusterLabs] Re: Re: How to configure to make each slave
> resource has one VIP
> 
> On Fri, 2018-02-23 at 12:45 +, 范国腾 wrote:
> > Thank you very much, Tomas.
> > This resolves my problem.
> > 
> > -----Original Message-----
> > From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> > Sent: February 23, 2018 17:37
> > To: users@clusterlabs.org
> > Subject: Re: [ClusterLabs] Re: How to configure to make each slave
> > resource has one VIP
> > 
> > On 23.2.2018 10:16, 范国腾 wrote:
> > > Tomas,
> > > 
> > > Thank you very much. I made the change according to your suggestion
> > > and it works.
> 
> One thing to keep in mind: a score of -INFINITY means the IPs will
> *never* run on the same node, even if one or more nodes go down. If
> that's what you want, of course, that's good. If you want the IPs to
> stay on different nodes normally, but be able to run on the same node
> in case of node outage, use a finite negative score.
> 
> > > 
> > > There is a question: if there are too many nodes (e.g. 10 slave nodes
> > > in total), I need to run "pcs constraint colocation add pgsql-slave-ipx
> > > with pgsql-slave-ipy -INFINITY" many times. Is there a simpler command
> > > to do this?
> > 
> > I think colocation set does the trick:
> > pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2
> > pgsql-slave-ip3 setoptions score=-INFINITY
> > You may specify as many resources as you need in this command.
> > 
> > Tomas
> > 
> > > 
> > > Master/Slave Set: pgsql-ha [pgsqld]
> > >   Masters: [ node1 ]
> > >   Slaves: [ node2 node3 ]
> > >   pgsql-master-ip(ocf::heartbeat:IPaddr2):   Started
> > > node1
> > >   pgsql-slave-ip1(ocf::heartbeat:IPaddr2):   Started
> > > node3
> > >   pgsql-slave-ip2(ocf::heartbeat:IPaddr2):   Started
> > > node2
> > > 
> > > Thanks
> > > Steven
> > > 
> > > -----Original Message-----
> > > From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> > > Sent: February 23, 2018 17:02
> > > To: users@clusterlabs.org
> > > Subject: Re: [ClusterLabs] How to configure to make each slave
> > > resource has one VIP
> > > 
> > > On 23.2.2018 08:17, 范国腾 wrote:
> > > > Hi,
> > > > 
> > > > Our system manages the database (one master and multiple slaves).
> > > > At first we used one VIP for multiple slave resources.
> > > > 
> > > > Now I want to change the configuration so that each slave resource
> > > > has a separate VIP. For example, I have 3 slave nodes and my VIP
> > > > group has 2 VIPs; the 2 VIPs bind to node1 and node2 now; when node2
> > > > fails, the VIP should move to node3.
> > > > 
> > > > 
> > > > I use the following commands to add the VIPs:
> > > > 
> > > >       pcs resource group add pgsql-slave-group pgsql-slave-ip1
> > > >       pgsql-slave-ip2
> > > > 
> > > >       pcs constraint colocation add pgsql-slave-group with slave
> > > >       pgsql-ha INFINITY
> > > > 
> > > > But now the two VIPs are on the same node:
> > > > 
> > > > Master/Slave Set: pgsql-ha [pgsqld]
> > > >  Masters: [ node1 ]
> > > >  Slaves: [ node2 node3 ]
> > > > pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started node1
> > > > Resource Group: pgsql-slave-group
> > > >  pgsql-slave-ip1    (ocf::heartbeat:IPaddr2):   Started node2
> > > >  pgsql-slave-ip2    (ocf::heartbeat:IPaddr2):   Started node2

[ClusterLabs] Re: Re: Re: How to configure to make each slave resource has one VIP

2018-02-23 Thread 范国腾
Thank you, Ken,

So I could use the following command: pcs constraint colocation set 
pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000


-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Ken Gaillot
Sent: February 23, 2018 23:14
To: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>
Subject: Re: [ClusterLabs] Re: Re: How to configure to make each slave
resource has one VIP

On Fri, 2018-02-23 at 12:45 +, 范国腾 wrote:
> Thank you very much, Tomas.
> This resolves my problem.
> 
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> Sent: February 23, 2018 17:37
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] Re: How to configure to make each slave resource
> has one VIP
> 
> On 23.2.2018 10:16, 范国腾 wrote:
> > Tomas,
> > 
> > Thank you very much. I made the change according to your suggestion
> > and it works.

One thing to keep in mind: a score of -INFINITY means the IPs will
*never* run on the same node, even if one or more nodes go down. If that's what 
you want, of course, that's good. If you want the IPs to stay on different 
nodes normally, but be able to run on the same node in case of node outage, use 
a finite negative score.

> > 
> > There is a question: if there are too many nodes (e.g. 10 slave nodes
> > in total), I need to run "pcs constraint colocation add pgsql-slave-ipx
> > with pgsql-slave-ipy -INFINITY" many times. Is there a simpler command
> > to do this?
> 
> I think colocation set does the trick:
> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2
> pgsql-slave-ip3 setoptions score=-INFINITY
> You may specify as many resources as you need in this command.
> 
> Tomas
> 
> > 
> > Master/Slave Set: pgsql-ha [pgsqld]
> >   Masters: [ node1 ]
> >   Slaves: [ node2 node3 ]
> >   pgsql-master-ip(ocf::heartbeat:IPaddr2):   Started
> > node1
> >   pgsql-slave-ip1(ocf::heartbeat:IPaddr2):   Started
> > node3
> >   pgsql-slave-ip2(ocf::heartbeat:IPaddr2):   Started
> > node2
> > 
> > Thanks
> > Steven
> > 
> > -----Original Message-----
> > From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> > Sent: February 23, 2018 17:02
> > To: users@clusterlabs.org
> > Subject: Re: [ClusterLabs] How to configure to make each slave resource
> > has one VIP
> > 
> > On 23.2.2018 08:17, 范国腾 wrote:
> > > Hi,
> > > 
> > > Our system manages the database (one master and multiple slaves).
> > > At first we used one VIP for multiple slave resources.
> > > 
> > > Now I want to change the configuration so that each slave resource
> > > has a separate VIP. For example, I have 3 slave nodes and my VIP
> > > group has 2 VIPs; the 2 VIPs bind to node1 and node2 now; when node2
> > > fails, the VIP should move to node3.
> > > 
> > > 
> > > I use the following commands to add the VIPs:
> > > 
> > >       pcs resource group add pgsql-slave-group pgsql-slave-ip1
> > >       pgsql-slave-ip2
> > > 
> > >       pcs constraint colocation add pgsql-slave-group with slave
> > >       pgsql-ha INFINITY
> > > 
> > > But now the two VIPs are on the same node:
> > > 
> > > Master/Slave Set: pgsql-ha [pgsqld]
> > >  Masters: [ node1 ]
> > >  Slaves: [ node2 node3 ]
> > > pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started node1
> > > Resource Group: pgsql-slave-group
> > >  pgsql-slave-ip1    (ocf::heartbeat:IPaddr2):   Started node2
> > >  pgsql-slave-ip2    (ocf::heartbeat:IPaddr2):   Started node2
> > > 
> > > Could anyone tell me how to configure this so that each slave node
> > > has a VIP?
> > 
> > Resources in a group always run on the same node. You want the IP
> > resources to run on different nodes, so you cannot put them into a
> > group.
> > 
> > This will take the resources out of the group:
> > pcs resource ungroup pgsql-slave-group
> > 
> > Then you can set colocation constraints for them:
> > pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha 
> > pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
> > 
> > You may also need to tell pacemaker not to put both ips on the same
> > node:
> > pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 -INFINITY