Re: [ClusterLabs] [Pacemaker on raspberry pi]

2018-03-05 Thread Ivan Devát


Hi,


# pcs cluster node add pi05 --start --enable
Disabling SBD service...
pi05: sbd disabled
Traceback (most recent call last):
   File "/usr/sbin/pcs", line 11, in 
     load_entry_point('pcs==0.9.160', 'console_scripts', 'pcs')()
   File "/usr/lib/python3.6/site-packages/pcs/app.py", line 190, in main
     cmd_map[command](argv)
   File "/usr/lib/python3.6/site-packages/pcs/cluster.py", line 218, in 
cluster_cmd

     cluster_node(argv)
   File "/usr/lib/python3.6/site-packages/pcs/cluster.py", line 1674, in 
cluster_node

     node_add(lib_env, node0, node1, modifiers)
   File "/usr/lib/python3.6/site-packages/pcs/cluster.py", line 1857, in 
node_add

     allow_incomplete_distribution=modifiers["skip_offline_nodes"]
   File 
"/usr/lib/python3.6/site-packages/pcs/lib/commands/remote_node.py", line 
58, in _share_authkey

     node_communication_format.pcmk_authkey_file(authkey_content),
   File 
"/usr/lib/python3.6/site-packages/pcs/lib/node_communication_format.py", 
line 47, in pcmk_authkey_file

     "pacemaker_remote authkey": pcmk_authkey_format(authkey_content)
   File 
"/usr/lib/python3.6/site-packages/pcs/lib/node_communication_format.py", 
line 29, in pcmk_authkey_format

     "data": base64.b64encode(authkey_content).decode("utf-8"),
   File "/usr/lib/python3.6/base64.py", line 58, in b64encode
     encoded = binascii.b2a_base64(s, newline=False)
TypeError: a bytes-like object is required, not 'str'

it seems there is a TypeError



This problem has been fixed in pcs-0.9.163-2.fc27. The package is currently in
testing and will reach stable soon (days to stable: 1).
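
For anyone curious, the root cause is that in Python 3 base64.b64encode() only
accepts bytes, while pcs passed the authkey content as a str. A quick way to see
the same error (illustrative only, not the actual pcs code path):

# python3 -c 'import base64; base64.b64encode("key")'
...
TypeError: a bytes-like object is required, not 'str'

# python3 -c 'import base64; print(base64.b64encode("key".encode("utf-8")))'
b'a2V5'

In other words, the authkey content needs to be handed to b64encode as bytes.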


Ivan
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] Re: Re: Re: How to configure to make each slave resource has one VIP

2018-03-05 Thread 范国腾
Thank you, Ken. Got it :)

-----Original Message-----
From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Ken Gaillot
Sent: March 6, 2018 7:18
To: Cluster Labs - All topics related to open-source clustering welcomed
Subject: Re: [ClusterLabs] Re: Re: How to configure to make each slave resource has one VIP

On Sun, 2018-02-25 at 02:24 +, 范国腾 wrote:
> Hello,
> 
> If all of the slave nodes crash, none of the slave VIPs can work.
> 
> Do we have any way to make all of the slave VIPs bind to the master
> node if there are no slave nodes in the system?
> 
> That way the user client will not know the system has a problem.
> 
> Thanks

Hi,

If you colocate all the slave IPs "with pgsql-ha" instead of "with slave 
pgsql-ha", then they can run on either master or slave nodes.

Including the master IP in the anti-colocation set will keep them apart 
normally.

> 
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> Sent: February 23, 2018 17:37
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] Re: How to configure to make each slave resource
> has one VIP
> 
> On 23.2.2018 at 10:16, 范国腾 wrote:
> > Tomas,
> > 
> > Thank you very much. I do the change according to your suggestion 
> > and it works.
> > 
> > There is one more question: if there are many nodes (e.g. 10 slave
> > nodes in total), I need to run "pcs constraint colocation add
> > pgsql-slave-ipx with pgsql-slave-ipy -INFINITY" many times. Is there
> > a simpler command to do this?
> 
> I think colocation set does the trick:
> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-INFINITY
> You may specify as many resources as you need in this command.
> 
> Tomas
> 
> > 
> > Master/Slave Set: pgsql-ha [pgsqld]
> >   Masters: [ node1 ]
> >   Slaves: [ node2 node3 ]
> >   pgsql-master-ip(ocf::heartbeat:IPaddr2):   Started
> > node1
> >   pgsql-slave-ip1(ocf::heartbeat:IPaddr2):   Started
> > node3
> >   pgsql-slave-ip2(ocf::heartbeat:IPaddr2):   Started
> > node2
> > 
> > Thanks
> > Steven
> > 
> > -----Original Message-----
> > From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> > Sent: February 23, 2018 17:02
> > To: users@clusterlabs.org
> > Subject: Re: [ClusterLabs] How to configure to make each slave resource
> > has one VIP
> > 
> > On 23.2.2018 at 08:17, 范国腾 wrote:
> > > Hi,
> > > 
> > > Our system manages a database (one master and multiple slaves).
> > > Currently we use one VIP for all of the slave resources.
> > > 
> > > Now I want to change the configuration so that each slave resource
> > > has a separate VIP. For example, I have 3 slave nodes and my VIP
> > > group has 2 VIPs; the 2 VIPs are bound to node1 and node2 now; when
> > > node2 fails, its VIP should move to node3.
> > > 
> > > I use the following commands to add the VIPs:
> > > 
> > >     pcs resource group add pgsql-slave-group pgsql-slave-ip1 pgsql-slave-ip2
> > >     pcs constraint colocation add pgsql-slave-group with slave pgsql-ha INFINITY
> > > 
> > > But now the two VIPs are on the same node:
> > > 
> > > Master/Slave Set: pgsql-ha [pgsqld]
> > >     Masters: [ node1 ]
> > >     Slaves: [ node2 node3 ]
> > > pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started node1
> > > Resource Group: pgsql-slave-group
> > >     pgsql-slave-ip1    (ocf::heartbeat:IPaddr2):   Started node2
> > >     pgsql-slave-ip2    (ocf::heartbeat:IPaddr2):   Started node2
> > > 
> > > Could anyone tell me how to configure this so that each slave node
> > > has a VIP?
> > 
> > Resources in a group always run on the same node. You want the ip 
> > resources to run on different nodes so you cannot put them into a 
> > group.
> > 
> > This will take the resources out of the group:
> > pcs resource ungroup pgsql-slave-group
> > 
> > Then you can set colocation constraints for them:
> > pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha 
> > pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
> > 
> > You may also need to tell pacemaker not to put both ips on the same
> > node:
> > pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 
> > -INFINITY
> > 
> > 
> > Regards,
> > Tomas
> > 
> > > 
> > > Thanks
> > > 
> > > 
> > > 
> > > ___
> > > Users mailing list: Users@clusterlabs.org 
> > > https://lists.clusterlabs.org/mailman/listinfo/users
> > > 
> > > Project Home: http://www.clusterlabs.org Getting started:
> > > http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > > Bugs: http://bugs.clusterlabs.org
> > > 
> > 
> > ___
> > Users mailing list: Users@clusterlabs.org 
> > https://lists.clusterlabs.org/mailman/listinfo/users
> > 
> > Project Home: http://www.clusterlabs.org Getting started: 
> > 

Re: [ClusterLabs] Re: Re: How to configure to make each slave resource has one VIP

2018-03-05 Thread Ken Gaillot
On Sun, 2018-02-25 at 02:24 +, 范国腾 wrote:
> Hello,
> 
> If all of the slave nodes crash, none of the slave VIPs can work.
> 
> Do we have any way to make all of the slave VIPs bind to the master
> node if there are no slave nodes in the system?
> 
> That way the user client will not know the system has a problem.
> 
> Thanks

Hi,

If you colocate all the slave IPs "with pgsql-ha" instead of "with
slave pgsql-ha", then they can run on either master or slave nodes.

Including the master IP in the anti-colocation set will keep them apart
normally.
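
Putting both pieces together, something like this should do it (untested
sketch, using the resource names from earlier in the thread; pick whatever
finite score suits you):

pcs constraint colocation add pgsql-slave-ip1 with pgsql-ha INFINITY
pcs constraint colocation add pgsql-slave-ip2 with pgsql-ha INFINITY
pcs constraint colocation set pgsql-master-ip pgsql-slave-ip1 pgsql-slave-ip2 setoptions score=-1000

With a finite negative score the IPs prefer separate nodes but are still
allowed to land on the master's node when no slave node is available.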

> 
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> Sent: February 23, 2018 17:37
> To: users@clusterlabs.org
> Subject: Re: [ClusterLabs] Re: How to configure to make each slave
> resource has one VIP
> 
> On 23.2.2018 at 10:16, 范国腾 wrote:
> > Tomas,
> > 
> > Thank you very much. I do the change according to your suggestion
> > and it works.
> > 
> > There is one more question: if there are many nodes (e.g. 10 slave
> > nodes in total), I need to run "pcs constraint colocation add
> > pgsql-slave-ipx with pgsql-slave-ipy -INFINITY" many times. Is there
> > a simpler command to do this?
> 
> I think colocation set does the trick:
> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-INFINITY
> You may specify as many resources as you need in this command.
> 
> Tomas
> 
> > 
> > Master/Slave Set: pgsql-ha [pgsqld]
> >   Masters: [ node1 ]
> >   Slaves: [ node2 node3 ]
> >   pgsql-master-ip(ocf::heartbeat:IPaddr2):   Started
> > node1
> >   pgsql-slave-ip1(ocf::heartbeat:IPaddr2):   Started
> > node3
> >   pgsql-slave-ip2(ocf::heartbeat:IPaddr2):   Started
> > node2
> > 
> > Thanks
> > Steven
> > 
> > -----Original Message-----
> > From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> > Sent: February 23, 2018 17:02
> > To: users@clusterlabs.org
> > Subject: Re: [ClusterLabs] How to configure to make each slave resource
> > has one VIP
> > 
> > On 23.2.2018 at 08:17, 范国腾 wrote:
> > > Hi,
> > > 
> > > Our system manages a database (one master and multiple slaves).
> > > Currently we use one VIP for all of the slave resources.
> > > 
> > > Now I want to change the configuration so that each slave resource
> > > has a separate VIP. For example, I have 3 slave nodes and my VIP
> > > group has 2 VIPs; the 2 VIPs are bound to node1 and node2 now; when
> > > node2 fails, its VIP should move to node3.
> > > 
> > > I use the following commands to add the VIPs:
> > > 
> > >     pcs resource group add pgsql-slave-group pgsql-slave-ip1 pgsql-slave-ip2
> > >     pcs constraint colocation add pgsql-slave-group with slave pgsql-ha INFINITY
> > > 
> > > But now the two VIPs are on the same node:
> > > 
> > > Master/Slave Set: pgsql-ha [pgsqld]
> > >     Masters: [ node1 ]
> > >     Slaves: [ node2 node3 ]
> > > pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started node1
> > > Resource Group: pgsql-slave-group
> > >     pgsql-slave-ip1    (ocf::heartbeat:IPaddr2):   Started node2
> > >     pgsql-slave-ip2    (ocf::heartbeat:IPaddr2):   Started node2
> > > 
> > > Could anyone tell me how to configure this so that each slave node
> > > has a VIP?
> > 
> > Resources in a group always run on the same node. You want the ip
> > resources to run on different nodes so you cannot put them into a
> > group.
> > 
> > This will take the resources out of the group:
> > pcs resource ungroup pgsql-slave-group
> > 
> > Then you can set colocation constraints for them:
> > pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha
> > pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
> > 
> > You may also need to tell pacemaker not to put both ips on the same
> > node:
> > pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 
> > -INFINITY
> > 
> > 
> > Regards,
> > Tomas
> > 
> > > 
> > > Thanks
> > > 
> > > 
> > > 
> > > ___
> > > Users mailing list: Users@clusterlabs.org 
> > > https://lists.clusterlabs.org/mailman/listinfo/users
> > > 
> > > Project Home: http://www.clusterlabs.org Getting started:
> > > http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > > Bugs: http://bugs.clusterlabs.org
> > > 
> > 
> > ___
> > Users mailing list: Users@clusterlabs.org 
> > https://lists.clusterlabs.org/mailman/listinfo/users
> > 
> > Project Home: http://www.clusterlabs.org Getting started: 
> > http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
> > ___
> > Users mailing list: Users@clusterlabs.org 
> > https://lists.clusterlabs.org/mailman/listinfo/users
> > 
> > Project Home: http://www.clusterlabs.org Getting 

Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave resource has one VIP

2018-03-05 Thread Ken Gaillot
On Sat, 2018-02-24 at 03:02 +, 范国腾 wrote:
> Thank you, Ken,
> 
> So I could use the following command:
> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000

Correct

(sorry for the late reply)

> 
> -----Original Message-----
> From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Ken Gaillot
> Sent: February 23, 2018 23:14
> To: Cluster Labs - All topics related to open-source clustering welcomed
> Subject: Re: [ClusterLabs] Re: Re: How to configure to make each slave
> resource has one VIP
> 
> On Fri, 2018-02-23 at 12:45 +, 范国腾 wrote:
> > Thank you very much, Tomas.
> > This resolves my problem.
> > 
> > -----Original Message-----
> > From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> > Sent: February 23, 2018 17:37
> > To: users@clusterlabs.org
> > Subject: Re: [ClusterLabs] Re: How to configure to make each slave
> > resource has one VIP
> > 
> > On 23.2.2018 at 10:16, 范国腾 wrote:
> > > Tomas,
> > > 
> > > Thank you very much. I do the change according to your
> > > suggestion 
> > > and it works.
> 
> One thing to keep in mind: a score of -INFINITY means the IPs will
> *never* run on the same node, even if one or more nodes go down. If
> that's what you want, of course, that's good. If you want the IPs to
> stay on different nodes normally, but be able to run on the same node
> in case of node outage, use a finite negative score.
> 
> > > 
> > > There is one more question: if there are many nodes (e.g. 10 slave
> > > nodes in total), I need to run "pcs constraint colocation add
> > > pgsql-slave-ipx with pgsql-slave-ipy -INFINITY" many times. Is there
> > > a simpler command to do this?
> > 
> > I think colocation set does the trick:
> > pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-INFINITY
> > You may specify as many resources as you need in this command.
> > 
> > Tomas
> > 
> > > 
> > > Master/Slave Set: pgsql-ha [pgsqld]
> > >   Masters: [ node1 ]
> > >   Slaves: [ node2 node3 ]
> > >   pgsql-master-ip(ocf::heartbeat:IPaddr2):   Started
> > > node1
> > >   pgsql-slave-ip1(ocf::heartbeat:IPaddr2):   Started
> > > node3
> > >   pgsql-slave-ip2(ocf::heartbeat:IPaddr2):   Started
> > > node2
> > > 
> > > Thanks
> > > Steven
> > > 
> > > -----Original Message-----
> > > From: Users [mailto:users-boun...@clusterlabs.org] On Behalf Of Tomas Jelinek
> > > Sent: February 23, 2018 17:02
> > > To: users@clusterlabs.org
> > > Subject: Re: [ClusterLabs] How to configure to make each slave
> > > resource has one VIP
> > > 
> > > On 23.2.2018 at 08:17, 范国腾 wrote:
> > > > Hi,
> > > > 
> > > > Our system manages a database (one master and multiple slaves).
> > > > Currently we use one VIP for all of the slave resources.
> > > > 
> > > > Now I want to change the configuration so that each slave resource
> > > > has a separate VIP. For example, I have 3 slave nodes and my VIP
> > > > group has 2 VIPs; the 2 VIPs are bound to node1 and node2 now; when
> > > > node2 fails, its VIP should move to node3.
> > > > 
> > > > I use the following commands to add the VIPs:
> > > > 
> > > >     pcs resource group add pgsql-slave-group pgsql-slave-ip1 pgsql-slave-ip2
> > > >     pcs constraint colocation add pgsql-slave-group with slave pgsql-ha INFINITY
> > > > 
> > > > But now the two VIPs are on the same node:
> > > > 
> > > > Master/Slave Set: pgsql-ha [pgsqld]
> > > >     Masters: [ node1 ]
> > > >     Slaves: [ node2 node3 ]
> > > > pgsql-master-ip    (ocf::heartbeat:IPaddr2):   Started node1
> > > > Resource Group: pgsql-slave-group
> > > >     pgsql-slave-ip1    (ocf::heartbeat:IPaddr2):   Started node2
> > > >     pgsql-slave-ip2    (ocf::heartbeat:IPaddr2):   Started node2
> > > > 
> > > > Could anyone tell me how to configure this so that each slave node
> > > > has a VIP?
> > > 
> > > Resources in a group always run on the same node. You want the ip
> > > resources to run on different nodes so you cannot put them into a
> > > group.
> > > 
> > > This will take the resources out of the group:
> > > pcs resource ungroup pgsql-slave-group
> > > 
> > > Then you can set colocation constraints for them:
> > > pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha
> > > pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
> > > 
> > > You may also need to tell pacemaker not to put both ips on the same node:
> > > pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 -INFINITY
> > > 
> > > 
> > > Regards,
> > > Tomas
> > > 
> > > > 
> > > > Thanks
> > > > 
> > > > 
> > > > 
> > > > ___
> > > > Users mailing list: 

Re: [ClusterLabs] copy file

2018-03-05 Thread Ken Gaillot
On Mon, 2018-03-05 at 15:09 +0100, Mevo Govo wrote:
> Hi,
> I am new to pacemaker. I think I should use DRBD instead of copying a
> file, but in this case I would copy a file from a DRBD device to an
> external device. Is there a built-in way to copy a file before a
> resource is started (and after the DRBD is promoted)? For example a
> "copy" resource? I did not find one.
> Thanks: lados.
> 

There's no stock way of doing that, but you could easily write an agent
that simply copies a file. You could use ocf:pacemaker:Dummy as a
template, and add the copy to the start action. You can use standard
ordering and colocation constraints to make sure everything happens in
the right sequence.
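
To give an idea of the shape of such an agent, here is a minimal sketch
(untested; the agent name "filecopy" and its src/dest parameters are invented
for this example, and a real agent would want more validation and error
handling):

#!/bin/sh
# filecopy - example OCF agent that copies a file when the resource starts.
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs

SRC="${OCF_RESKEY_src}"
DEST="${OCF_RESKEY_dest}"
# State file so monitor can distinguish "started" from "stopped".
STATE="${HA_RSCTMP}/filecopy-${OCF_RESOURCE_INSTANCE}.state"

meta_data() {
    cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="filecopy" version="0.1">
  <version>1.0</version>
  <longdesc lang="en">Copies src to dest when the resource starts.</longdesc>
  <shortdesc lang="en">Copy a file on start (example)</shortdesc>
  <parameters>
    <parameter name="src" required="1">
      <longdesc lang="en">File to copy</longdesc>
      <shortdesc lang="en">Source file</shortdesc>
      <content type="string"/>
    </parameter>
    <parameter name="dest" required="1">
      <longdesc lang="en">Where to copy the file</longdesc>
      <shortdesc lang="en">Destination path</shortdesc>
      <content type="string"/>
    </parameter>
  </parameters>
  <actions>
    <action name="start" timeout="20s"/>
    <action name="stop" timeout="20s"/>
    <action name="monitor" timeout="20s" interval="10s"/>
    <action name="validate-all" timeout="20s"/>
    <action name="meta-data" timeout="5s"/>
  </actions>
</resource-agent>
END
}

case $__OCF_ACTION in
meta-data)    meta_data; exit $OCF_SUCCESS;;
validate-all) [ -n "$SRC" ] && [ -n "$DEST" ] || exit $OCF_ERR_CONFIGURED
              exit $OCF_SUCCESS;;
start)        cp -p "$SRC" "$DEST" || exit $OCF_ERR_GENERIC
              touch "$STATE"; exit $OCF_SUCCESS;;
stop)         rm -f "$STATE"; exit $OCF_SUCCESS;;
monitor)      if [ -f "$STATE" ]; then exit $OCF_SUCCESS; else exit $OCF_NOT_RUNNING; fi;;
*)            exit $OCF_ERR_UNIMPLEMENTED;;
esac

Install it as /usr/lib/ocf/resource.d/<yourprovider>/filecopy, make it
executable, and then something along these lines (resource names here are
placeholders):

pcs resource create copy_file ocf:<yourprovider>:filecopy src=/drbd_mount/somefile dest=/external/somefile
pcs constraint order start my_drbd_fs then start copy_file
pcs constraint colocation add copy_file with my_drbd_fs INFINITY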

I don't know what capabilities your external device has, but another
approach would be to use an NFS server to share the DRBD file system, and
mount it from the device, if you want direct access to the original
file rather than a copy.
-- 
Ken Gaillot 
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] [Pacemaker on raspberry pi]

2018-03-05 Thread pfrenard

Hello guys,


I discovered Pacemaker a few days ago and I'm trying to set up a
Raspberry Pi 3 cluster :)


I am running Fedora 27 - armv7l release.

pacemaker is 1.1.18

corosync is 2.4.3

1/ I can create a cluster with 2 nodes, let's say pi01 and pi02, with no
issue.


Then I want to add a third node with the command "pcs cluster node add pi05".


# pcs cluster setup --name hapi --enable pi01 pi02
Destroying cluster on nodes: pi01, pi02...
pi02: Stopping Cluster (pacemaker)...
pi01: Stopping Cluster (pacemaker)...
pi01: Successfully destroyed cluster
pi02: Successfully destroyed cluster

Sending 'pacemaker_remote authkey' to 'pi01', 'pi02'
pi01: successful distribution of the file 'pacemaker_remote authkey'
pi02: successful distribution of the file 'pacemaker_remote authkey'
Sending cluster config files to the nodes...
pi01: Succeeded
pi02: Succeeded
pi01: Cluster Enabled
pi02: Cluster Enabled

Synchronizing pcsd certificates on nodes pi01, pi02...
pi01: Success
pi02: Success
Restarting pcsd on the nodes in order to reload the certificates...
pi01: Success
pi02: Success


# pcs cluster start --all
pi02: Starting Cluster...
pi01: Starting Cluster...

# pcs status

Cluster name: hapi
WARNING: no stonith devices and stonith-enabled is not false
Stack: corosync
Current DC: pi02 (version 1.1.18-2.fc27-2b07d5c5a9) - partition with quorum
Last updated: Mon Mar  5 21:51:43 2018
Last change: Mon Mar  5 21:51:19 2018 by hacluster via crmd on pi02

2 nodes configured
0 resources configured

Online: [ pi01 pi02 ]

No resources


Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

# pcs cluster auth pi01 pi02 pi05
pi01: Already authorized
pi02: Already authorized
pi05: Already authorized

# pcs cluster node add pi05 --start --enable
Disabling SBD service...
pi05: sbd disabled
Traceback (most recent call last):
  File "/usr/sbin/pcs", line 11, in 
    load_entry_point('pcs==0.9.160', 'console_scripts', 'pcs')()
  File "/usr/lib/python3.6/site-packages/pcs/app.py", line 190, in main
    cmd_map[command](argv)
  File "/usr/lib/python3.6/site-packages/pcs/cluster.py", line 218, in 
cluster_cmd

    cluster_node(argv)
  File "/usr/lib/python3.6/site-packages/pcs/cluster.py", line 1674, in 
cluster_node

    node_add(lib_env, node0, node1, modifiers)
  File "/usr/lib/python3.6/site-packages/pcs/cluster.py", line 1857, in 
node_add

    allow_incomplete_distribution=modifiers["skip_offline_nodes"]
  File 
"/usr/lib/python3.6/site-packages/pcs/lib/commands/remote_node.py", line 
58, in _share_authkey

    node_communication_format.pcmk_authkey_file(authkey_content),
  File 
"/usr/lib/python3.6/site-packages/pcs/lib/node_communication_format.py", 
line 47, in pcmk_authkey_file

    "pacemaker_remote authkey": pcmk_authkey_format(authkey_content)
  File 
"/usr/lib/python3.6/site-packages/pcs/lib/node_communication_format.py", 
line 29, in pcmk_authkey_format

    "data": base64.b64encode(authkey_content).decode("utf-8"),
  File "/usr/lib/python3.6/base64.py", line 58, in b64encode
    encoded = binascii.b2a_base64(s, newline=False)
TypeError: a bytes-like object is required, not 'str'

it seems there is a TypeError

let's try with --debug


# pcs cluster node add pi05 --start --enable --debug
Running: /usr/bin/ruby -I/usr/lib/pcsd/ /usr/lib/pcsd/pcsd-cli.rb read_tokens

Environment:
  DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus
  DISPLAY=localhost:11.0
  EDITOR=vi
  GEM_HOME=/usr/lib/pcsd/vendor/bundle/ruby
  HISTCONTROL=ignoredups
  HISTSIZE=1000
  HOME=/root
  HOSTNAME=pi01
  LANG=en_US.UTF-8
  LC_ALL=C
  LD_LIBRARY_PATH=/usr/lib:/usr/local/lib:/lib
  LESSOPEN=||/usr/bin/lesspipe.sh %s
  LOGNAME=root

[ClusterLabs] copy file

2018-03-05 Thread Mevo Govo
Hi,
I am new to pacemaker. I think I should use DRBD instead of copying a file,
but in this case I would copy a file from a DRBD device to an external
device. Is there a built-in way to copy a file before a resource is started
(and after the DRBD is promoted)? For example a "copy" resource? I did not
find one.
Thanks: lados.
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


[ClusterLabs] resource-agents v4.1.1

2018-03-05 Thread Oyvind Albrigtsen

ClusterLabs is happy to announce resource-agents v4.1.1.
Source code is available at:
https://github.com/ClusterLabs/resource-agents/releases/tag/v4.1.1

The most significant enhancements in this release are:
- new resource agents:
 - azure-lb
 - jira
 - lxd-info/machine-info
 - mariadb (for MariaDB master/slave replication setup with GTID)
 - mpathpersist

- bugfixes and enhancements:
 - VirtualDomain: properly migrate VMs on node shutdown (bsc#1074014)
 - mpathpersist: fixed issue with reservation key parsing in status()
 - pgsql: create stats temp directory if it doesn't exist
 - pgsql: improved validation for replication mode
 - CTDB: add new possible location for CTDB_SYSCONFIG
 - CTDB: cope with deprecated "idmap backend" smb.conf option
 - CTDB: fix initial probe
 - Filesystem: add support for cvfs
 - IPsrcaddr: match exact route to avoid failing
 - IPsrcaddr: only check for ifconfig on BSD/Solaris
 - Raid1: ignore transient devices after stopping a device
 - awseip/awsvip: improvements (incl multi NIC support)
 - crm_*: use new parameter names
 - db2: improve monitor and simplify STANDBY/.../DISCONNECTED
 - lvmlockd: change lvm.conf to use lvmlockd
 - ocf-shellfuncs: fix fallback name for ocf_attribute_target()
 - oracle: fix alter user syntax for set_mon_user_profile
 - oracle: log warning when using sysdba instead of "monuser"
 - redis: add support for tunneling replication traffic
 - syslog-ng: fix to make commercial version supported as well
 - tomcat: fix invalid stop option

The full list of changes for resource-agents is available at:
https://github.com/ClusterLabs/resource-agents/blob/v4.1.1/ChangeLog

Everyone is encouraged to download and test the new release.
We do many regression tests and simulations, but we can't cover all
possible use cases, so your feedback is important and appreciated.

Many thanks to all the contributors to this release.


Best,
The resource-agents maintainers
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] why some resources blocked

2018-03-05 Thread Mevo Govo
Thanks, it works. I replaced "promote" with "start" in the constraint,
and the ora* resources now start.
Also thanks for the resource grouping advice.
lados.
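
For the archives, the grouping Ken suggested would look roughly like this
(untested; the group name is made up):

pcs resource group add ora_group fs_drbd1 ora_listener ora_db_xe
pcs constraint colocation add ora_group with master drbd1_sync INFINITY
pcs constraint order promote drbd1_sync then start ora_group

The group keeps fs_drbd1, ora_listener and ora_db_xe together and starts them
in that order, so the individual colocation/order constraints between them can
be dropped.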


2018-03-02 16:26 GMT+01:00 Ken Gaillot :

> On Fri, 2018-03-02 at 11:53 +0100, Mevo Govo wrote:
> >
> > Hi,
> >
> > I am new to pacemaker, corosync and this list.
> > I created a cluster (based on the "Clusters from Scratch" doc). The DRBD
> > and my filesystem on it work fine. Then I added oracle and oralsnr
> > resources, but these oracle resources remain stopped. I can start and
> > stop the ora resources fine with "pcs resource debug-start" (I can
> > log in to the database, lsnrctl status, ...). Could you help me: why do
> > these resources not start automatically? I do not see errors in
> > "/var/log/cluster/corosync.log", only this:
> >
> > Mar  1 10:44:40 xetest1 crmd[12440]: warning: Input I_ELECTION_DC
> > received in state S_INTEGRATION from do_election_check
> > Mar  1 10:44:40 xetest1 pengine[12439]:  notice: On loss of CCM
> > Quorum: Ignore
> > Mar  1 10:44:40 xetest1 pengine[12439]:  notice: Start
> > fs_drbd1#011(xetest1)
> > Mar  1 10:44:40 xetest1 pengine[12439]:  notice: Start
> > ora_listener#011(xetest1 - blocked)
> > Mar  1 10:44:40 xetest1 pengine[12439]:  notice: Start
> > ora_db_xe#011(xetest1 - blocked)
> >
> >
> > [root@xetest1 /]# pcs status
> > Cluster name: cluster_xetest
> > Stack: corosync
> > Current DC: xetest2 (version 1.1.16-12.el7-94ff4df) - partition with
> > quorum
> > Last updated: Fri Mar  2 10:03:04 2018
> > Last change: Fri Mar  2 10:02:48 2018 by root via cibadmin on xetest1
> >
> > 2 nodes configured
> > 5 resources configured
> >
> > Online: [ xetest1 xetest2 ]
> >
> > Full list of resources:
> >
> >  Master/Slave Set: drbd1_sync [drbd1]
> >  Masters: [ xetest1 ]
> >  Slaves: [ xetest2 ]
> >  fs_drbd1   (ocf::heartbeat:Filesystem):Started xetest1
> >  ora_listener   (ocf::heartbeat:oralsnr):   Stopped
> >  ora_db_xe  (ocf::heartbeat:oracle):Stopped
> >
> > Daemon Status:
> >   corosync: active/disabled
> >   pacemaker: active/disabled
> >   pcsd: active/disabled
> > [root@xetest1 /]#
> >
> > # I created oracle resources by these commands (OCFMON user also
> > created successful during debug-start)
> >
> > pcs -f clust_ora_cfg_tmp resource create ora_listener
> > ocf:heartbeat:oralsnr \
> >   sid="XE" \
> >   home="/u01/app/oracle/product/11.2.0/xe" \
> >   user="oracle" \
> >   listener="LISTENER" \
> >   op monitor interval=30s
> >
> > pcs -f clust_ora_cfg_tmp constraint colocation add ora_listener with
> > fs_drbd1 INFINITY
> > pcs -f clust_ora_cfg_tmp constraint order promote fs_drbd1 then start
> > ora_listener
>
> ^^^ fs_drbd1 is not a master/slave resource, so it can't be promoted
>
> I'm guessing you want to colocate fs_drbd1 with the master role of
> drbd1_sync (and order it after the promote of that).
>
> If you want ora_listener and then ora_db_exe to start in order after
> that, I'd group fs_drbd1, ora_listener, and ora_db_exe, then
> colocate/order the group with the master role of drbd1_sync.
>
> >
> > pcs -f clust_ora_cfg_tmp resource create ora_db_xe
> > ocf:heartbeat:oracle \
> >   sid="XE" \
> >   home="/u01/app/oracle/product/11.2.0/xe" \
> >   user="oracle" \
> >   monuser="OCFMON" \
> >   monpassword="**" \
> >   shutdown_method="immediate" \
> >   op monitor interval=30s
> >
> > pcs -f clust_ora_cfg_tmp constraint colocation add ora_db_xe with
> > ora_listener INFINITY
> > pcs -f clust_ora_cfg_tmp constraint order promote ora_listener then
> > start ora_db_xe
> >
> > pcs -f clust_ora_cfg_tmp constraint
> > pcs -f clust_ora_cfg_tmp resource show
> >
> > pcs cluster cib-push clust_ora_cfg_tmp
> > pcs status
> >
> > Thanks: lados.
> >
> --
> Ken Gaillot 
> ___
> Users mailing list: Users@clusterlabs.org
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>
___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] Does CMAN Still Not Support Multipe CoroSync Rings?

2018-03-05 Thread Jan Friesse

Eric,



Well, I finally got around to trying out the  tag and I managed to get 
two rings running, but the first ring is not obeying my address and port directives.

Here's my cluster.conf

[cluster.conf XML not preserved in the archive]


The rings are up...

[root@ha10b ~]# corosync-cfgtool -s
Printing ring status.
Local node ID 2
RING ID 0
 id  = 192.168.10.61
 status  = ring 0 active with no faults
RING ID 1
 id  = 198.51.100.61
 status  = ring 1 active with no faults

HOWEVER, when I run tcpdump, I can see that ring2 is running on the appropriate
multicast address and port, but ring1 is running on the default address and
port...

[root@ha10b ~]# tcpdump -nn -i bond0 net 239.192.0.0/16
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on bond0, link-type EN10MB (Ethernet), capture size 65535 bytes
22:54:36.738395 IP 192.168.10.60.5404 > 239.192.170.111.5405: UDP, length 119
22:54:40.547048 IP 192.168.10.60.5404 > 239.192.170.111.5405: UDP, length 119

How do I get ring1 running on my desired address and port of 239.255.5.1, port
4000?


I'm not sure "mcast" for every node is really needed.

Try the last example of:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/cluster_administration/s1-config-rrp-cli-ca
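
The cluster.conf approach on that page boils down to configuring the multicast
addresses once at the cman level, with an altmulticast element for the second
ring, rather than per-node mcast elements; roughly (addresses/ports here are
just the ones from this thread, adjust to your network):

<cman>
   <multicast addr="239.192.170.111" port="5405"/>
   <altmulticast addr="239.255.5.1" port="4000"/>
</cman>

plus an <altname name="..."/> child in each <clusternode> for the interface
that carries the second ring.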



   
   


Honza



--Eric



___
Users mailing list: Users@clusterlabs.org
https://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org