Re: [Pacemaker] votequorum for 2 node cluster

2014-06-11 Thread Jacek Konieczny
On 06/11/14 16:35, Kostiantyn Ponomarenko wrote: And that is like roulette, in case we lose the lowest nodeid we lose all. So I can lose only the node which doesn't have the lowest nodeid? And it's not useful in a 2-node cluster. Am I correct? It may be useful. If you define roles of the
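
For reference, a minimal corosync.conf quorum section for the two-node case under discussion (a sketch only, assuming corosync 2.x votequorum; see votequorum(5) for the exact semantics of each option):

    quorum {
        provider: corosync_votequorum
        expected_votes: 2
        # two_node lets the surviving node keep quorum after the peer fails,
        # at the cost of needing both nodes present at first start (wait_for_all)
        two_node: 1
        # auto_tie_breaker instead gives the tie to the partition holding the
        # lowest nodeid, which is the 'roulette' behaviour discussed above
        # auto_tie_breaker: 1
    }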

Re: [Pacemaker] Howto check if the current node is active?

2014-01-07 Thread Jacek Konieczny
On 2014-01-07 13:33, Bauer, Stefan (IZLBW Extern) wrote: How can I check if the current node I'm connected to is the active one? It should be parseable because I want to use it in a script. Pacemaker is not limited to Active-Passive setups, in fact it has no notion of 'Active' node – every node
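
Since there is no global "active" node, a script instead checks whether the local node currently runs a particular resource. A minimal sketch, assuming a hypothetical resource named my_ip:

    #!/bin/sh
    # ask Pacemaker where the resource runs; the output ends with the node name
    host=$(crm_resource --resource my_ip --locate 2>/dev/null | awk '{print $NF}')
    # compare with the local node name as the cluster knows it
    if [ "$host" = "$(crm_node -n)" ]; then
        echo "this node currently runs my_ip"
    fi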

Re: [Pacemaker] pacemaker/corosync: error: qb_sys_mmap_file_open: couldn't open file

2013-06-26 Thread Jacek Konieczny
On Wed, 26 Jun 2013 14:35:03 +1000 Andrew Beekhof and...@beekhof.net wrote: Urgh: info Jun 25 13:40:10 lrmd_ipc_connect(913):0: Connecting to lrmd trace Jun 25 13:40:10 pick_ipc_buffer(670):0: Using max message size of 51200 error Jun 25 13:40:10 qb_sys_mmap_file_open(92):2147483648:

Re: [Pacemaker] pacemaker/corosync: error: qb_sys_mmap_file_open: couldn't open file

2013-06-26 Thread Jacek Konieczny
On Wed, 26 Jun 2013 18:38:37 +1000 Andrew Beekhof and...@beekhof.net wrote: trace Jun 25 13:40:10 gio_read_socket(366):0: 0xa6c140.4 1 (ref=1) trace Jun 25 13:40:10 lrmd_ipc_accept(89):0: Connection 0xa6d110 info Jun 25 13:40:10 crm_client_new(276):0: Connecting 0xa6d110 for uid=17

Re: [Pacemaker] [corosync] pacemaker/corosync: error: qb_sys_mmap_file_open: couldn't open file

2013-06-25 Thread Jacek Konieczny
On Tue, 25 Jun 2013 10:10:13 +1000 Andrew Beekhof and...@beekhof.net wrote: On 24/06/2013, at 9:31 PM, Jacek Konieczny jaj...@jajcus.net wrote: After I have upgraded Pacemaker from 1.1.8 to 1.1.9 on a node I get the following errors in my syslog and Pacemaker doesn't seem to be able

Re: [Pacemaker] [corosync] pacemaker/corosync: error: qb_sys_mmap_file_open: couldn't open file

2013-06-25 Thread Jacek Konieczny
On Tue, 25 Jun 2013 16:43:54 +1000 Andrew Beekhof and...@beekhof.net wrote: Ok, I was just checking Pacemaker was built for the running version of libqb. Yes it was. corosync 2.2.0 and libqb 0.14.0 both on the build system and on the cluster systems. Hmm… I forgot libqb is a separate

Re: [Pacemaker] [corosync] pacemaker/corosync: error: qb_sys_mmap_file_open: couldn't open file

2013-06-25 Thread Jacek Konieczny
On Tue, 25 Jun 2013 08:59:19 +0200 Jacek Konieczny jaj...@jajcus.net wrote: On Tue, 25 Jun 2013 16:43:54 +1000 Andrew Beekhof and...@beekhof.net wrote: Ok, I was just checking Pacemaker was built for the running version of libqb. Yes it was. corosync 2.2.0 and libqb 0.14.0 both

Re: [Pacemaker] [corosync] pacemaker/corosync: error: qb_sys_mmap_file_open: couldn't open file

2013-06-25 Thread Jacek Konieczny
On Tue, 25 Jun 2013 10:50:14 +0300 Vladislav Bogdanov bub...@hoster-ok.com wrote: I would recommend qb 1.4.4. 1.4.3 had at least one nasty bug which affects pacemaker. Just tried that. It didn't help. Jacek

Re: [Pacemaker] pacemaker/corosync: error: qb_sys_mmap_file_open: couldn't open file

2013-06-25 Thread Jacek Konieczny
On Tue, 25 Jun 2013 20:24:00 +1000 Andrew Beekhof and...@beekhof.net wrote: On 25/06/2013, at 5:56 PM, Jacek Konieczny jaj...@jajcus.net wrote: On Tue, 25 Jun 2013 10:50:14 +0300 Vladislav Bogdanov bub...@hoster-ok.com wrote: I would recommend qb 1.4.4. 1.4.3 had at least one nasty bug

[Pacemaker] pacemaker/corosync: error: qb_sys_mmap_file_open: couldn't open file

2013-06-24 Thread Jacek Konieczny
After I have upgraded Pacemaker from 1.1.8 to 1.1.9 on a node I get the following errors in my syslog and Pacemaker doesn't seem to be able to start services on this node. Jun 24 13:19:44 dev1n2 crmd[5994]: error: qb_sys_mmap_file_open: couldn't open file
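
The rest of the thread revolves around the libqb/Pacemaker combination. A quick, hedged way to check which libqb the installed daemons actually load (paths and package names are typical for an RPM-based system and may differ on other distributions):

    # library the binaries resolve at runtime
    ldd /usr/sbin/pacemakerd | grep libqb
    ldd /usr/sbin/corosync | grep libqb
    # installed package version
    rpm -q libqb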

Re: [Pacemaker] Two resource nodes + one quorum node

2013-06-14 Thread Jacek Konieczny
On Thu, 13 Jun 2013 15:50:26 +0400 Andrey Groshev gre...@yandex.ru wrote: 11.06.2013, 22:52, Michael Schwartzkopff mi...@clusterbau.com: On Tuesday, 11 June 2013, 22:33:32, Andrey Groshev wrote: Hi, I want to make a Postgres cluster. As far as I understand, for the proper
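
The usual pattern for "two resource nodes plus a quorum node" is to add the third node as a full cluster member but keep it permanently in standby, so it contributes a quorum vote while Pacemaker never places resources on it. A sketch using crmsh (the node name 'arbiter' is hypothetical):

    # the third node only provides a vote, never runs resources
    crm node standby arbiter
    # equivalent low-level form:
    # crm_standby -N arbiter -v on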

Re: [Pacemaker] issues when installing on pxe booted environment

2013-03-29 Thread Jacek Konieczny
On Fri, 29 Mar 2013 11:37:37 +1100 Andrew Beekhof and...@beekhof.net wrote: On Thu, Mar 28, 2013 at 10:43 PM, Rainer Brestan rainer.bres...@gmx.net wrote: Hi John, to get Corosync/Pacemaker running during anaconda installation, I have created a configuration RPM package which does a few

Re: [Pacemaker] stonith and avoiding split brain in two nodes cluster

2013-03-25 Thread Jacek Konieczny
On Mon, 25 Mar 2013 13:54:22 +0100 My problem is how to avoid split brain situation with this configuration, without configuring a 3rd node. I have read about quorum disks, external/sbd stonith plugin and other references, but I'm too confused with all this. For example, [1]
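
For the shared-disk option mentioned here, the classic setup combines the sbd daemon on both nodes with an external/sbd STONITH resource. A hedged crmsh sketch (the device path is illustrative; sbd itself must be configured and running on both nodes):

    primitive fencing-sbd stonith:external/sbd \
        params sbd_device="/dev/disk/by-id/shared-lun-part1"
    property stonith-enabled=true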

Re: [Pacemaker] stonith and avoiding split brain in two nodes cluster

2013-03-25 Thread Jacek Konieczny
On Mon, 25 Mar 2013 20:01:28 +0100 Angel L. Mateo ama...@um.es wrote: quorum { provider: corosync_votequorum expected_votes: 2 two_node: 1 } Corosync will then manage quorum for the two-node cluster and Pacemaker I'm using corosync 1.1 which is the one provided with

Re: [Pacemaker] Does LVM resouce agent conflict with UDEV rules?

2013-03-07 Thread Jacek Konieczny
On Wed, 06 Mar 2013 22:41:51 +0100 Sven Arnold sven.arn...@localite.de wrote: In fact, disabling the udev rule SUBSYSTEM=="block", ACTION=="add|change", ENV{ID_FS_TYPE}=="lvm*|LVM*",\ RUN+="watershed sh -c '/sbin/lvm vgscan; /sbin/lvm vgchange -a y'" seems to resolve the problem for me. This
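
For completeness, a cluster-controlled volume group is normally activated by the ocf:heartbeat:LVM agent rather than by udev. A sketch in crmsh syntax (the volume group name is hypothetical):

    primitive my-vg ocf:heartbeat:LVM \
        params volgrpname="clustervg" exclusive="true" \
        op monitor interval="30s"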

[Pacemaker] Accessing CIB by user not 'root' and not 'hacluster'

2013-01-25 Thread Jacek Konieczny
Hi, It used to be possible to access Pacemaker's CIB from any user in the 'haclient' group, but after one of the upgrades it stopped working (I didn't care about this issue much then, so I cannot recall the exact point). Now I would like to restore the cluster state overview functionality in
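
A quick way to verify whether haclient-group access works at all on a given build; a minimal sketch ('monitor' is a hypothetical unprivileged account):

    usermod -a -G haclient monitor
    # then, as that user, a one-shot view of the cluster state:
    crm_mon -1
    cibadmin --query > /dev/null && echo "CIB readable"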

Re: [Pacemaker] CIB verification failure with any change via crmsh

2013-01-24 Thread Jacek Konieczny
Hi, On Wed, 23 Jan 2013 18:52:20 +0100 Dejan Muhamedagic deja...@fastmail.fm wrote: <nodes> <node id="35956928" uname="sipc2n2" Not sure if id can start with a digit. Corosync node ids are always digits-only. This should really work with versions >= v1.2.4 Yeah… I have looked into

Re: [Pacemaker] CIB verification failure with any change via crmsh

2013-01-24 Thread Jacek Konieczny
On Thu, 24 Jan 2013 09:04:14 +0100 Jacek Konieczny jaj...@jajcus.net wrote: I should probably upgrade my CIB somehow Indeed. 'cibadmin --upgrade --force' solved my problem. Thanks for all the hints. Greets, Jacek
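
For anyone hitting the same thing: the schema the CIB currently validates against is visible on the root element, and the upgrade rewrites it in place. A short sketch:

    # the opening <cib ...> tag carries the validate-with= attribute
    cibadmin --query | head -n 1
    # rewrite the configuration to the newest schema the installed tools support
    cibadmin --upgrade --force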

[Pacemaker] CIB verification failure with any change via crmsh

2013-01-23 Thread Jacek Konieczny
Hi, I have recently upgraded Pacemaker on one of my clusters from 1.0.something to 1.1.8 and installed crmsh to manage it as I used to. crmsh mostly works for me, until I try to change the configuration with 'crm configure'. Any change, even a trivial one, shows verification errors and fails to commit:
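
The same validation errors can be reproduced outside of crmsh, which helps separate crmsh problems from genuine CIB problems. A hedged sketch:

    # validate the live CIB and print the individual errors
    crm_verify --live-check --verbose
    # or validate a saved copy:
    # cibadmin --query > cib.xml && crm_verify --xml-file cib.xml -V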

Re: [Pacemaker] CIB verification failure with any change via crmsh

2013-01-23 Thread Jacek Konieczny
On Wed, 23 Jan 2013 16:44:45 +0100 Lars Marowsky-Bree l...@suse.com wrote: On 2013-01-23T16:31:20, Jacek Konieczny jaj...@jajcus.net wrote: I have recently upgraded Pacemaker on one of my clusters from 1.0.something to 1.1.8 and installed crmsh to manage it as I used to. It'd

Re: [Pacemaker] stonithd crash on exit

2012-10-31 Thread Jacek Konieczny
On Wed, Oct 31, 2012 at 05:33:03PM +1100, Andrew Beekhof wrote: I haven't seen that before. What version? Pacemaker 1.1.8, corosync 2.1.0, cluster-glue 1.0.11 On Wed, Oct 31, 2012 at 12:42 AM, Jacek Konieczny jaj...@jajcus.net wrote: Hello, Probably this is not a critical problem

[Pacemaker] stonithd crash on exit

2012-10-30 Thread Jacek Konieczny
Hello, Probably this is not a critical problem, but it became annoying during my cluster setup/testing time: Whenever I restart corosync with 'systemctl restart corosync.service' I get a message about stonithd crashing with SIGSEGV: stonithd[3179]: segfault at 10 ip 00403144 sp