[ClusterLabs] Re: Re: Re: Re: How to configure to make each slave resource have one VIP

2018-03-06 Thread 范国腾
Thank you, Rorthais,

I read the link and it is very helpful.

I ran into a few issues when installing the cluster:
1. "pcs cluster stop" sometimes fails to stop the cluster.
2. When I upgrade PAF, I just replace the pgsqlms file; when I upgrade 
PostgreSQL, I just replace /usr/local/pgsql/.
3. If the cluster does not stop cleanly, the pg_controldata status is not 
"shut down", and PAF will then refuse to start PostgreSQL. So I normally 
change pgsqlms as below after installing PAF.

elsif ( $pgisready_rc == 2 ) {
    # The instance is not listening.
    # We check the process status using pg_ctl status and check
    # if it was properly shut down using pg_controldata.
    ocf_log( 'debug', 'pgsql_monitor: instance "%s" is not listening',
        $OCF_RESOURCE_INSTANCE );
    return _confirm_stopped();  ### remove this line
    return $OCF_NOT_RUNNING;    ### add this line
}


-----Original Message-----
From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com] 
Sent: March 6, 2018 17:08
To: 范国腾 
Cc: Cluster Labs - All topics related to open-source clustering welcomed 
Subject: Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave resource 
has one VIP

Hi guys,

A few months ago, I started a new chapter on this exact subject for "PAF - 
Cluster administration under CentOS" (see:
https://clusterlabs.github.io/PAF/CentOS-7-admin-cookbook.html).

Please find my draft attached.

All feedback, fixes, comments and intensive tests are welcome!



Re: [ClusterLabs] CephFS virtual IP

2018-03-06 Thread Oscar Segarra
Hi Ken,

Thanks a lot for your quick response. As you guessed, the interesting part is
the port check.

As I'm planning to work with a Ceph cluster, I don't think there is any
resource agent for the Ceph Monitor implemented yet.

Regarding your suggestions:

*If for whatever reason, you can't put the service under cluster control*
That would be perfect for me! But I don't know how to do it... :(

*(1) write a dummy agent whose monitor action checks the port, and colocate
the IP with that; *
How can I do that, is there any tutorial?

*(2) have a script check the port and set a node attribute appropriately,
and use a  rule-based location constraint for the IP (this approach would
be useful mainly if you already have some script doing a check).*
How can I do that, is there any tutorial?

Sorry for my simple questions, I'm a basic pacemaker/corosync user, and I
don't have experience developing new resource agents.

Thanks a lot!


2018-03-06 16:05 GMT+01:00 Ken Gaillot :

> On Tue, 2018-03-06 at 10:11 +0100, Oscar Segarra wrote:
> > Hi,
> >
> > I'd like to recover this post in order to know if there is any way to
> > achieve this kind of simple HA system.
> >
> > Thanks a lot.
> >
> > 2017-08-28 4:10 GMT+02:00 Oscar Segarra :
> > > Hi,
> > >
> > > In Ceph, by design there is no single point of failure in terms of
> > > server roles, nevertheless, from the client point of view, it might
> > > exist.
> > >
> > > In my environment:
> > > Mon1: 192.168.100.101:6789
> > > Mon2: 192.168.100.102:6789
> > > Mon3: 192.168.100.103:6789
> > >
> > > Client: 192.168.100.104
> > >
> > > I have created a line in /etc/fstab referencing Mon1 but, of course,
> > > if Mon1 fails, the mount point gets stuck.
> > >
> > > I'd like to create a vip assigned to any host with tcp port 6789 UP
> > > and, in the client, mount the CephFS using that VIP.
> > >
> > > Is there any way to achieve this?
> > >
> > > Thanks a lot in advance!
>
> The IP itself would be a standard floating IP address using the
> ocf:heartbeat:IPaddr2 resource agent. "Clusters from Scratch" has an
> example, though I'm sure you're familiar with that:
>
> http://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_add_a_resource.html
>
> The interesting part is making sure that port 6789 is responding. The
> usual design in these cases is to put the service that provides that
> port under cluster control; its monitor action would ensure the port is
> responding, and a standard colocation constraint would ensure the IP
> can only run when the service is up.
>
> If for whatever reason, you can't put the service under cluster
> control, I see two approaches: (1) write a dummy agent whose monitor
> action checks the port, and colocate the IP with that; or (2) have a
> script check the port and set a node attribute appropriately, and use a
> rule-based location constraint for the IP (this approach would be
> useful mainly if you already have some script doing a check).
> --
> Ken Gaillot 


Re: [ClusterLabs] CephFS virtual IP

2018-03-06 Thread Ken Gaillot
On Tue, 2018-03-06 at 10:11 +0100, Oscar Segarra wrote:
> Hi,
> 
> I'd like to recover this post in order to know if there is any way to
> achieve this kind of simple HA system.
> 
> Thanks a lot.
> 
> 2017-08-28 4:10 GMT+02:00 Oscar Segarra :
> > Hi,
> > 
> > In Ceph, by design there is no single point of failure in terms of
> > server roles, nevertheless, from the client point of view, it might
> > exist.
> > 
> > In my environment:
> > Mon1: 192.168.100.101:6789
> > Mon2: 192.168.100.102:6789
> > Mon3: 192.168.100.103:6789
> > 
> > Client: 192.168.100.104
> > 
> > I have created a line in /etc/fstab referencing Mon1 but, of course,
> > if Mon1 fails, the mount point gets stuck. 
> > 
> > I'd like to create a vip assigned to any host with tcp port 6789 UP
> > and, in the client, mount the CephFS using that VIP.
> > 
> > Is there any way to achieve this? 
> > 
> > Thanks a lot in advance! 

The IP itself would be a standard floating IP address using the
ocf:heartbeat:IPaddr2 resource agent. "Clusters from Scratch" has an
example, though I'm sure you're familiar with that:

http://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/_add_a_resource.html

The interesting part is making sure that port 6789 is responding. The
usual design in these cases is to put the service that provides that
port under cluster control; its monitor action would ensure the port is
responding, and a standard colocation constraint would ensure the IP
can only run when the service is up.

If for whatever reason, you can't put the service under cluster
control, I see two approaches: (1) write a dummy agent whose monitor
action checks the port, and colocate the IP with that; or (2) have a
script check the port and set a node attribute appropriately, and use a
rule-based location constraint for the IP (this approach would be
useful mainly if you already have some script doing a check).
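
For approach (2), here is a rough sketch; all names in it, such as the
ceph_mon_up attribute and the ceph-vip resource, are placeholders. A small
script run periodically on each monitor node records the result of the port
check in a node attribute, and a rule-based location constraint keeps the IP
away from nodes where that attribute is missing or 0:

# periodic check on each monitor node (cron or a systemd timer);
# MON_ADDR is a placeholder for this node's own monitor address
MON_ADDR=192.168.100.101
if timeout 2 bash -c "cat < /dev/null > /dev/tcp/$MON_ADDR/6789" 2>/dev/null; then
    attrd_updater -n ceph_mon_up -U 1
else
    attrd_updater -n ceph_mon_up -U 0
fi

# rule-based location constraint: keep the IP off nodes where the
# attribute is not defined or is 0 (run once, from any cluster node)
pcs constraint location ceph-vip rule score=-INFINITY \
    not_defined ceph_mon_up or ceph_mon_up eq 0

Note that attrd_updater sets a transient attribute, which is cleared when the
node leaves the cluster; crm_attribute --type nodes would make it permanent
instead.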
-- 
Ken Gaillot 


Re: [ClusterLabs] CephFS virtual IP

2018-03-06 Thread Oscar Segarra
Hi,

I'd like to recover this post in order to know if there is any way to
achieve this kind of simple HA system.

Thanks a lot.

2017-08-28 4:10 GMT+02:00 Oscar Segarra :

> Hi,
>
> In Ceph, by design there is no single point of failure in terms of server
> roles, nevertheless, from the client point of view, it might exist.
>
> In my environment:
> Mon1: 192.168.100.101:6789
> Mon2: 192.168.100.102:6789
> Mon3: 192.168.100.103:6789
>
> Client: 192.168.100.104
>
> I have created a line in /etc/fstab referencing Mon1 but, of course, if
> Mon1 fails, the mount point gets stuck.
>
> I'd like to create a vip assigned to any host with tcp port 6789 UP and,
> in the client, mount the CephFS using that VIP.
>
> Is there any way to achieve this?
>
> Thanks a lot in advance!
>
>
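
To illustrate the goal on the client side, the /etc/fstab entry would end up
looking something like this (the VIP address 192.168.100.110 is just an
example made up for this sketch):

# /etc/fstab on the client, mounting CephFS through the floating VIP
192.168.100.110:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0 0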


Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave resource have one VIP

2018-03-06 Thread Jehan-Guillaume de Rorthais
Hi guys,

A few months ago, I started a new chapter on this exact subject for "PAF -
Cluster administration under CentOS" (see:
https://clusterlabs.github.io/PAF/CentOS-7-admin-cookbook.html).

Please find my draft attached.

All feedback, fixes, comments and intensive tests are welcome!

## Adding IPs on slave nodes

In this chapter, we are using a three-node cluster with one PostgreSQL master
instance and two standby instances.

As usual, we start from the cluster created in the quick start documentation:
* one master resource called `pgsql-ha`
* an IP address called `pgsql-master-ip` linked to the `pgsql-ha` master role

See the [Quick Start CentOS 7]({{ site.baseurl}}/Quick_Start-CentOS-7.html#cluster-resources) 
for more information.

We want to create two IP addresses with the following properties:
* start on a standby node
* avoid starting on the same standby node as the other one
* move to the remaining standby node should the other one fail
* move to the master if there is no standby alive

To make this possible, we have to play with the resource co-location scores.

First, let's add two `IPaddr2` resources called `pgsql-ip-stby1` and
`pgsql-ip-stby2` holding IP addresses `192.168.122.49` and `192.168.122.48`:

~~~
# pcs resource create pgsql-ip-stby1 ocf:heartbeat:IPaddr2 \
  cidr_netmask=24 ip=192.168.122.49 op monitor interval=10s

# pcs resource create pgsql-ip-stby2 ocf:heartbeat:IPaddr2 \
  cidr_netmask=24 ip=192.168.122.48 op monitor interval=10s
~~~

We want both IP addresses to avoid co-locating with each other. We add
a co-location constraint so `pgsql-ip-stby2` avoids `pgsql-ip-stby1` with a
score of `-5`:

~~~
# pcs constraint colocation add pgsql-ip-stby2 with pgsql-ip-stby1 -5
~~~

> **NOTE**: this means the cluster manager has to start `pgsql-ip-stby1` first
> to decide where `pgsql-ip-stby2` should start according to the new scores in
> the cluster. It also means that whenever you move `pgsql-ip-stby1` to another
> node, the cluster might have to stop `pgsql-ip-stby2` first and restart it
> elsewhere, depending on the new scores.
{: .notice}

Now, we add similar co-location constraints to define that each IP address
prefers to run on a node with a slave of `pgsql-ha`:

~~~
# pcs constraint colocation add pgsql-ip-stby1 with slave pgsql-ha 10
# pcs constraint order start pgsql-ha then start pgsql-ip-stby1 kind=Mandatory

# pcs constraint colocation add pgsql-ip-stby2 with slave pgsql-ha 10
# pcs constraint order start pgsql-ha then start pgsql-ip-stby2 kind=Mandatory
~~~
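
A quick way to check the result is to list the constraints and look at the
allocation scores the cluster computes, for example:

~~~
# pcs constraint                      # list the colocation and order constraints
# pcs status                          # check where each IP address actually runs
# crm_simulate -sL | grep pgsql-ip    # show the allocation scores for the IPs
~~~

Both IP addresses should end up on distinct standby nodes as long as two
standbys are available.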