Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>; 范国腾 <fanguot...@highgo.com>
Subject: Re: [ClusterLabs] The slave does not promote to master
On 05/07/2018 07:39 AM, 范国腾 wrote:
Hi,
We have two nodes cluster using PAF to manage the postgres. Node2 is master.
Master/Slave Set: pgsql-ha [pgsqld]
Hi,
We have two VIP resources, and we use the following command to keep them on
different nodes.
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 setoptions
score=-1000
Now we have added a new node to the cluster, along with a new VIP. We want the
colocation constraint set to change
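As the follow-up in this thread suggests, a colocation set cannot be edited in place; a sketch of the remove-and-recreate approach, assuming pcs 0.9.x syntax and a hypothetical new VIP named pgsql-slave-ip3:

```shell
# Find the set constraint's id, remove it, then recreate the set
# including the new VIP (pgsql-slave-ip3 is an assumed name).
pcs constraint --full                  # note the set constraint's id
pcs constraint remove <constraint-id>  # use the id of the set constraint
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 \
    setoptions score=-1000
```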
Subject: Re: [ClusterLabs] How to change the "pcs constraint colocation set"
On 15.5.2018 at 05:25, 范国腾 wrote:
> Hi,
>
> We have two VIP resources, and we use the following command to keep them on
> different nodes.
>
> pcs constraint colocation set pgsql-slave-ip1 pgs
Hi,
When I run "pcs cluster stop --all", it sometimes hangs with no response at
all. The log is below. Can we find the reason for the hang from the log, and
how can we make the cluster stop immediately?
[root@node2 pg_log]# pcs status
Cluster name: hgpurog
Stack: corosync
Current
...@clusterlabs.org] on behalf of Tomas Jelinek
Sent: May 15, 2018 16:12
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Re: How to change the "pcs constraint colocation set"
On 15.5.2018 at 10:02, 范国腾 wrote:
> Thank you, Tomas. I know how to remove a constraint " pcs constraint
Sorry, my mistake; I should have used the second id. It is OK now. Thanks, Tomas.
-----Original Message-----
From: 范国腾
Sent: May 15, 2018 16:19
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Re: How to change the "pcs constraint colocation set"
It could not find the id of the constraint set.
[root@nod
Hi,
The cluster has three nodes: one master and two slaves. Now we run "pcs
cluster stop --all" to stop all of the nodes. Then we run "pcs cluster start"
on the master node. We find it is not able to start. The cause is that the
stonith resource could not be started, so all of the other
Sent: …2, 12:20
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Could not start only one node in pacemaker
On 02.05.2018 05:52, 范国腾 wrote:
> Hi,
> The cluster has three nodes: one is master and two are slave. Now we run “pcs
> cluster stop --all” to stop all of the nodes. Then we run “pcs cluster start”
-----Original Message-----
From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
Sent: April 26, 2018 15:07
To: 范国腾 <fanguot...@highgo.com>
Cc: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>; 李梦怡 <limen...@highgo.com>
Subject: Re: [ClusterLabs] the P
te the "pcs cleanup" command every time?
-----Original Message-----
From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
Sent: April 25, 2018 18:39
To: 范国腾 <fanguot...@highgo.com>
Cc: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>; 李梦怡 <lim
(enp0s3), not the
heartbeat network card (enp0s8)?
-----Original Message-----
From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
Sent: April 26, 2018 16:02
To: 范国腾 <fanguot...@highgo.com>
Cc: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>
Ulrich,
Thank you very much for the help. When we run the performance test, our
application (pgsql-ha) starts more than 500 processes to handle client
requests. Could that cause this issue?
Is there any workaround or method to keep Pacemaker from restarting the
resource in such a situation?
主题: Re: [ClusterLabs] pacemaker reports monitor timeout while CPU is high
On Wed, 2018-01-10 at 09:40 +0000, 范国腾 wrote:
> Hello,
>
> This issue only appears when we run performance test and the CPU is
> high. The cluster and log is as below. The Pacemaker will restart the
Hello,
This issue appears only when we run the performance test and the CPU load is
high. The cluster status and log are below. Pacemaker restarts the slave-side
pgsql-ha resource about every two minutes.
Take the following scenario as an example (when the pgsqlms RA is called, we
print the log
Hello,
The help for "pcs --debug" says "Print all network traffic and external
commands run." But when I run "pcs --debug", it still prints only the help
information. How do I get it to print the network traffic?
Thanks
Steven
[root@db3 ~]# pcs --debug
Usage: pcs [-f file] [-h] [commands]...
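The usage output above hints at the cause: --debug is a global option, not a command by itself, so pcs falls back to printing the help. A sketch, assuming pcs 0.9.x:

```shell
# Put --debug before an actual command; on its own it only prints usage.
pcs --debug status
pcs --debug resource show
```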
[ClusterLabs] Re: pacemaker reports monitor timeout while CPU is high
On Thu, 2018-01-11 at 03:50 +0000, 范国腾 wrote:
> Thank you, Ken.
>
> We have set the timeout to 10 seconds, but it reports a timeout after only
> 2 seconds. So setting higher timeouts seems not to work.
> Our a
Hello,
I set up the pacemaker cluster using VirtualBox. There are three nodes. The
OS is CentOS 7, and /dev/sdb is the shared storage (all three nodes use the
same disk file).
(1) At first, I create the stonith using this command:
pcs stonith create scsi-stonith-device fence_scsi
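The create command is cut off above. For reference, a fence_scsi device for a shared /dev/sdb typically also needs the host list, the device path, and unfencing; a sketch under those assumptions (node names are placeholders):

```shell
# fence_scsi on a shared disk: SCSI reservations require unfencing
pcs stonith create scsi-stonith-device fence_scsi \
    pcmk_host_list="node1 node2 node3" \
    devices="/dev/sdb" \
    meta provides=unfencing
```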
Andrei Borzenkov wrote:
> On Thu, Feb 8, 2018 at 5:51 AM, 范国腾 <fanguot...@highgo.com> wrote:
>> Hello,
>>
>> I setup the pacemaker cluster using virtualbox. There are three nodes. The
>> OS is centos7, the /dev/sdb is the shared storage(three nodes use the same
On Fri, Feb 9, 2018 at 6:33 AM, 范国腾 <fanguot...@highgo.com> wrote:
Thanks, Klaus.
The information is very helpful. I will try to study fence_vbox and
fence_sbd.
In our test lab, we use IPMI as the stonith. But I want to set up a simu
related to open-source clustering welcomed
<users@clusterlabs.org>
Subject: Re: [ClusterLabs] Re: Re: How to configure to make each slave
resource has one VIP
On Fri, 2018-02-23 at 12:45 +0000, 范国腾 wrote:
> Thank you very much, Tomas.
> This resolves my problem.
>
> -----Original Message-----
Hi,
Our system manages the database (one master and multiple slaves). Initially
we used one VIP for multiple slave resources.
Now I want to change the configuration so that each slave resource has a
separate VIP. For example, I have 3 slave nodes and my VIP group has 2 VIPs;
the 2 VIPs bind to node1
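One way this can be sketched (hypothetical VIP names and addresses; assumes the pgsql-ha master/slave resource named elsewhere in this digest and pcs 0.9.x syntax): give each slave its own IPaddr2 resource, colocate each with the slave role, and anti-colocate the VIPs with each other so they spread across nodes.

```shell
# Hypothetical sketch: one VIP per slave (addresses are placeholders)
pcs resource create pgsql-slave-ip1 ocf:heartbeat:IPaddr2 ip=192.168.1.101 cidr_netmask=24
pcs resource create pgsql-slave-ip2 ocf:heartbeat:IPaddr2 ip=192.168.1.102 cidr_netmask=24
# keep each VIP with a slave instance of pgsql-ha
pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha
pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
# negative-score set so the VIPs prefer different nodes
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 setoptions score=-1000
```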
: [origin software="rsyslogd" swVersion="7.4.7"
x-pid="627" x-info="http://www.rsyslog.com"] start
Feb 26 03:59:12 db1 rsyslogd-2027: imjournal: fscanf on state file
`/var/lib/rsyslog/imjournal.state' failed
[try http://www.rsyslog.com/e/2027 ]
From: 范国腾
Sent
at:IPaddr2): Started node2
Thanks
Steven
-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] on behalf of Tomas Jelinek
Sent: February 23, 2018 17:02
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] How to configure to make each slave resource has one VIP
On 23.2.2018 at 08:17, 范国腾 wrote:
> Hi,
>
Thank you very much, Tomas.
This resolves my problem.
-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] on behalf of Tomas Jelinek
Sent: February 23, 2018 17:37
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Re: How to configure to make each slave resource
has one VIP
On 23.2.2018 at 10:16, 范国腾
-source clustering welcomed
<users@clusterlabs.org>
Subject: Re: [ClusterLabs] Pacemaker Master restarts when Slave is added to the
cluster
Usual suspect - interleave=false on clone resource.
On Wed, Dec 27, 2017 at 10:49 AM, 范国腾 <fanguot...@highgo.com> wrote:
> Hello,
>
>
>
>
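If interleave=false is indeed the cause, it can be flipped on the clone's meta attributes; a sketch assuming the master/slave resource is named pgsql-ha as elsewhere in this digest:

```shell
# Let each clone instance react to its local peer instead of all instances
pcs resource meta pgsql-ha interleave=true
```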
: Users [mailto:users-boun...@clusterlabs.org] on behalf of Tomas Jelinek
Sent: February 23, 2018 17:37
To: users@clusterlabs.org
Subject: Re: [ClusterLabs] Re: How to configure to make each slave resource
has one VIP
On 23.2.2018 at 10:16, 范国腾 wrote:
> Tomas,
>
> Thank you very much. I do the change accordin
urn $OCF_NOT_RUNNING;### add this line
}
-----Original Message-----
From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
Sent: March 6, 2018 17:08
To: 范国腾 <fanguot...@highgo.com>
Cc: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>
Subject: R
pcs constraint colocation add pgsql-slave-ip1 with pgsql-ha
pcs constraint colocation add pgsql-slave-ip2 with pgsql-ha
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-master-ip
setoptions score=-1000
-----Original Message-----
From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
Sent: March 7, 2018 16:2
Hello,
There are three nodes in our cluster (RHEL 7). When we run "reboot" on one
node, "pcs status" shows the node status as offline and the resource status
as Stopped. That is fine. But when we power off the node directly, the node
status is "UNCLEAN (offline)" and the resource status
clustering welcomed
<users@clusterlabs.org>
Cc: 李晓飞 <lixiao...@highgo.com>; 祁华鹏 <qihuap...@highgo.com>
Subject: Re: [ClusterLabs] The node and resource status is different when the
node powers off
On Thu, Mar 15, 2018 at 10:42 AM, 范国腾 <fanguot...@highgo.com> wrote:
> Hello,
>
Rorthais [mailto:j...@dalibo.com]
Sent: March 8, 2018 17:41
To: 范国腾 <fanguot...@highgo.com>
Cc: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>
Subject: Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave
resource has one VIP
On Thu, 8
Hi,
I am using PAF too. You could read the
/usr/lib/ocf/resource.d/heartbeat/pgsqlms file to see which PostgreSQL
command is called.
For example, Pacemaker start -> pg_ctl start; Pacemaker monitor -> pg_isready.
Thanks
Steven
-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] on behalf of Casey &
Hello,
We use the following command to create the cluster. Node2 is always the
master when the cluster starts. Why does Pacemaker not select node1 as the
default master?
How do we configure it if we want node1 to be the default master?
pcs cluster setup --name cluster_pgsql node1 node2
pcs resource
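One way to bias the promotion (a sketch, not taken from this thread; assumes pcs 0.9.x rule syntax and the pgsql-ha resource name): add a location rule that scores node1 higher for the master role.

```shell
# Hypothetical sketch: prefer node1 for the master role of pgsql-ha
# (the # in #uname must be escaped from the shell)
pcs constraint location pgsql-ha rule role=master score=100 \#uname eq node1
```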
Thank you, Rorthais. I see it now.
-----Original Message-----
From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
Sent: April 13, 2018 17:17
To: 范国腾 <fanguot...@highgo.com>
Cc: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>
Subject: Re: [ClusterLa
I have met a similar issue when PostgreSQL is not stopped cleanly.
You could run pg_controldata to check whether your PostgreSQL state is
"shut down" or "shut down in recovery".
I changed /usr/lib/ocf/resource.d/heartbeat/pgsqlms to avoid this problem:
elsif ( $pgisready_rc == 2 ) {
# The instance
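The pg_controldata check mentioned above can be sketched like this (the PGDATA path is an assumption):

```shell
# "Database cluster state" should read "shut down" for a cleanly stopped
# primary, or "shut down in recovery" for a cleanly stopped standby.
pg_controldata /var/lib/pgsql/data | grep "Database cluster state"
```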
Hi,
Our lab has two resources: (1) PAF (master/slave); (2) a VIP (bound to the
master PAF node). The configuration is in the attachment.
Each node has two network cards: one (enp0s8) is for the Pacemaker heartbeat
on the internal network, the other (enp0s3) is for the master VIP on the external
meout=60s op promote timeout=300s op demote timeout=120s op monitor
interval=10s timeout=100s role="Master" op monitor interval=16s timeout=100s
role="Slave" op notify timeout=60s
pcs resource master pgsql-ha pgsqld notify=true interleave=true
-----Original Message-----
From: 范国腾
Sent
figuration.
[cid:image003.jpg@01D3D700.2F3E24D0]
But it does not happen in the following configuration. Why is the behavior
different?
[cid:image004.jpg@01D3D700.2F3E24D0]
-----Original Message-----
From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
Sent: April 17, 2018 17:47
To: 范国腾 <fanguot...@highgo.com
Hi,
We installed a new lab that has only the postgres resource and the VIP
resource. After the cluster is installed, the status is OK: one node is
master and the other is slave. Then I ran "pcs cluster stop --all" to shut
down the cluster, and then "pcs cluster start --all" to start the
Hello,
I want to set up a cluster on two nodes: one master and one slave. I don't
need a fencing device because my internal network is stable. I use the
following commands to create the resource, but both nodes stay slaves and
the cluster does not promote either to master. Could you
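A common cause in setups like this is that fencing is still required by default: with no stonith device configured, Pacemaker refuses to run or promote resources. For a test-only lab (generally discouraged in production), a sketch:

```shell
# Test/lab only: tell Pacemaker not to require fencing
pcs property set stonith-enabled=false
```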
Subject: Re: [ClusterLabs] How to setup a simple master/slave cluster in two nodes
without stonith resource
On 03.04.2018 05:07, 范国腾 wrote:
> Hello,
>
> I want to setup a cluster in two nodes. One is master and the other is slave.
> I don’t need the fencing device because my internal net
From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
Sent: April 3, 2018 21:02
To: 范国腾 <fanguot...@highgo.com>
Cc: Cluster Labs - All topics related to open-source clustering welcomed
<users@clusterlabs.org>
Subject: Re: [ClusterLabs] How to setup a simple master/slave cluster in two nodes
without stonith
slave resource has
one VIP
On Sun, 2018-02-25 at 02:24 +0000, 范国腾 wrote:
> Hello,
>
> If all of the slave nodes crash, none of the slave VIPs can work.
>
> Do we have any way to make all of the slave VIPs bind to the master
> node if there are no slave nodes in the syst
Hi,
We use PAF (https://dalibo.github.io/PAF/) to manage PostgreSQL.
According to the user's requirements, we cannot use trust mode in the
pg_hba.conf file. So when running psql, it asks for the password and we have
to enter it manually.
So the pcs status show