Re: [Pacemaker] PostgreSQL replication RA: PGSQL.lock

2013-02-14 Thread Takatoshi MATSUO
Hi

2013/2/13 Andrew ni...@seti.kr.ua:
 12.02.2013 02:35, Takatoshi MATSUO wrote:

 Hi

 2013/2/9 Andrew ni...@seti.kr.ua:

 Hi all.
 For what reason is PGSQL.lock implemented in the RA, and what problems may
 happen if it's removed from the RA code?

 It may cause data inconsistency.
 If the file exists on a node, you need to copy data from the new master.

 I noticed that during master migration the lock still remains and postgresql
 isn't started on the old master; demote will also fail because of the lock
 file. Also, if the cluster fails (e.g. a power failure occurs), the old
 master will not start, and the slave will be promoted to master after
 startup - that's OK when both nodes crash simultaneously, but it's really
 bad when the old slave crashed earlier. If postgres crashes/is killed by the
 OOM killer/etc., it also will not be restarted...

The existence of the lock file does not necessarily mean that the data is
inconsistent. The RA can't know the detailed data status.

If you know that the data is valid, you can delete the lock file and clear
the failcount.
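
A minimal sketch of that recovery (assuming the pgsql RA's default tmpdir of
/var/lib/pgsql/tmp and a resource named pgsql - adjust both to your setup):

# on the node whose data you have verified to be valid
rm /var/lib/pgsql/tmp/PGSQL.lock
# clear the failcount so Pacemaker will manage the resource again
crm resource cleanup pgsql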


 Maybe it would be better to watch the log files on a slave that tries to
 sync with the master/to check the slave timeline, and if the slave can't
 sync because the timeline differs - to fail it with an error (or even to
 sync with the master via pg_basebackup - it supports connecting to a remote
 server and works quickly; see
 http://sharingtechknowledge.blogspot.com/2011/12/postgresql-pgbasebackup-forget-about.html
 for an example)?
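
A rough sketch of such a resync (pg_basebackup per PostgreSQL 9.1+; the host,
user and data directory here are placeholders):

# on the out-of-sync slave, with PostgreSQL stopped and the old data moved away
pg_basebackup -h master-ip -U replication_user -D /var/lib/pgsql/data -P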



 Also, a 2nd question: how can I prevent the pgsql RA from promoting a
 master before both nodes are up OR before a timeout is reached (e.g., if
 the 2nd node is dead)?

 You can set the xlog_check_count parameter to a large number.
 The RA retries comparing data the specified number of times while in Slave.
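
As an illustrative snippet (xlog_check_count is a pgsql RA parameter; the
value and the other parameters shown are placeholders):

primitive pgsql ocf:heartbeat:pgsql \
 params rep_mode=sync master_ip=192.168.0.1 xlog_check_count=360 \
 op monitor interval=10s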

 Thanks; I'll try this.

 Or you can use target-role as below, too.
 
 ms msPostgresql pgsql \
  meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true target-role=Slave
 ---
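
When you later want to promote, you could flip that meta attribute, e.g.
(crm shell syntax; resource name from the snippet above):

crm resource meta msPostgresql set target-role Master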

 In that case, how can I choose on which node I should promote the resource
 to master (the one with the fresher WAL position) - should I do this
 manually, or can I just run promote?


In a master/slave configuration, the RA decides which node can be promoted
using master-score,
and Pacemaker promotes it considering colocation, order, rules and so on.
So you can't promote it manually.

But as far as the pgsql RA goes, you can do it as follows:

1. Stop Pacemaker on all nodes.
2. Clear all Pacemaker state, e.g. rm
/var/lib/heartbeat/crm/cib* on both nodes.
3. Start Pacemaker on the one server which should become Master.
   - The RA reliably increments the master-score while in Slave, and PostgreSQL
 is promoted because there is no pgsql-data-status and no other node.
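
A condensed sketch of those steps (paths as above; the init commands vary by
distribution and are assumptions):

# steps 1-2: on both nodes
/etc/init.d/pacemaker stop
rm /var/lib/heartbeat/crm/cib*
# step 3: only on the node that should become Master
/etc/init.d/pacemaker start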


 I think that a clone of the Delay RA for both nodes (to avoid a switching
 delay on failure of the node holding the Delay resource) will do this - but
 I can't find how I should write such a rule; it seems like rules in
 Pacemaker are too simple (or too badly described). I expect something like
 this (in braces I included the conditions that I don't know how to write):

 location pgsql_master_location ms_Postgresql \
  rule $role=master -inf: {Delay RA is not running and ms_Postgresql
 count eq 1}

 Is this possible?


Thanks,
Takatoshi MATSUO



Re: [Pacemaker] PostgreSQL replication RA: PGSQL.lock

2013-02-14 Thread Andrew
14.02.2013 10:03, Takatoshi MATSUO wrote:
 Hi

 2013/2/13 Andrew ni...@seti.kr.ua:
 12.02.2013 02:35, Takatoshi MATSUO wrote:

 Hi

 2013/2/9 Andrew ni...@seti.kr.ua:
 Hi all.
 For what reason is PGSQL.lock implemented in the RA, and what problems may
 happen if it's removed from the RA code?
 It may cause data inconsistency.
 If the file exists on a node, you need to copy data from the new master.
 I noticed that during master migration the lock still remains and postgresql
 isn't started on the old master; demote will also fail because of the lock
 file. Also, if the cluster fails (e.g. a power failure occurs), the old
 master will not start, and the slave will be promoted to master after
 startup - that's OK when both nodes crash simultaneously, but it's really
 bad when the old slave crashed earlier. If postgres crashes/is killed by the
 OOM killer/etc., it also will not be restarted...
 The existence of the lock file does not necessarily mean that the data is
 inconsistent. The RA can't know the detailed data status.

 If you know that the data is valid, you can delete the lock file and clear
 the failcount.
Really - the RA could check the last log replay position and choose its
behaviour accordingly (start the old 'master' as master if its log position
is ahead of the old slave's; or try to start it as slave and fail if it
isn't synced within a timeout; or force a sync if its log position is behind
the old slave's).
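
For comparison, the replay position can be read on each node with the
standard function of that PostgreSQL era, e.g.:

psql -U postgres -c "SELECT pg_last_xlog_replay_location();"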
 Maybe it would be better to watch the log files on a slave that tries to
 sync with the master/to check the slave timeline, and if the slave can't
 sync because the timeline differs - to fail it with an error (or even to
 sync with the master via pg_basebackup - it supports connecting to a remote
 server and works quickly; see
 http://sharingtechknowledge.blogspot.com/2011/12/postgresql-pgbasebackup-forget-about.html
 for an example)?


 Also, a 2nd question: how can I prevent the pgsql RA from promoting a
 master before both nodes are up OR before a timeout is reached (e.g., if
 the 2nd node is dead)?
 You can set the xlog_check_count parameter to a large number.
 The RA retries comparing data the specified number of times while in Slave.
 Thanks; I'll try this.

 Or you can use target-role as below, too.
 
 ms msPostgresql pgsql \
  meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true target-role=Slave
 ---
 In that case, how can I choose on which node I should promote the resource
 to master (the one with the fresher WAL position) - should I do this
 manually, or can I just run promote?

 In a master/slave configuration, the RA decides which node can be promoted
 using master-score,
 and Pacemaker promotes it considering colocation, order, rules and so on.
 So you can't promote it manually.

 But as far as the pgsql RA goes, you can do it as follows:

 1. Stop Pacemaker on all nodes.
 2. Clear all Pacemaker state, e.g. rm
 /var/lib/heartbeat/crm/cib* on both nodes.
 3. Start Pacemaker on the one server which should become Master.
- The RA reliably increments the master-score while in Slave, and PostgreSQL
  is promoted because there is no pgsql-data-status and no other node.

OK, thanks. I'm not too familiar with Pacemaker, so some operational
details are still hidden from me.

But for master migration there is a much easier solution: migrating the
colocated master IP.
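
For instance (crm shell syntax; the resource and node names are hypothetical):

crm resource migrate master-ip node2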



Re: [Pacemaker] Is crm_gui available under RHEL6?

2013-02-14 Thread Rasto Levrinc
On Thu, Feb 14, 2013 at 12:20 AM, Ron Kerry rke...@sgi.com wrote:
 I am not sure if this is an appropriate question for a community forum since
 it is a RHEL-specific question. However, I cannot think of a better forum to
 use (as someone coming from a heavy SLES background), so I will ask it
 anyway. Feel free to shoot me down or point me in a different direction.

 I do not find the pacemaker GUI in any of the RHEL6 HA distribution rpms. I
 have tried to think of all of its various names - crm_gui, hb_gui,
 mgmt/haclient etc. - but I have not found it. A simple Google search also was
 not helpful - perhaps due to me not being sufficiently skilled at search
 techniques. Is it available somewhere in the RHEL6 HA distribution and I am
 just not finding it? Or do I need to build it from source or pull some
 community-built rpm off the web?

I am also not aware of any crm_gui packages for RHEL6, not even a community
build. But you should be able to compile it on RHEL6 from here:

https://github.com/ClusterLabs/pacemaker-mgmt

Luckily there are many alternative GUIs, but only 1 or 2 are really usable.

In theory you can get a crmsh package from here:

http://download.opensuse.org/repositories/network:/ha-clustering/

I don't see a HAWK package there, so it's probably still not compatible with
the RHEL 6 Ruby version at this moment.
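
A hypothetical sketch of enabling such a repository on RHEL6 (the exact
subdirectory and repo-file name are assumptions - check the repository index
first):

wget -O /etc/yum.repos.d/ha-clustering.repo \
  http://download.opensuse.org/repositories/network:/ha-clustering/RedHat_RHEL-6/network:ha-clustering.repo
yum install crmsh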

Then there's pcs-gui, but last time I checked it wasn't ready.

Last but not least there's LCMC (http://lcmc.sf.net), which you install on
your desktop computer, whatever it is, and use to configure and manage the
cluster remotely via SSH.

Rasto



Re: [Pacemaker] Reason for cluster resource migration

2013-02-14 Thread Ante Karamatić
On 13.02.2013 16:27, Andrew Martin wrote:

 Unfortunately the pacemaker and corosync packages in the Ubuntu 
 repositories are too old. Due to bugs in these versions, I 
 upgraded to the latest Pacemaker 1.1.8 and Corosync 2.1.0 (it was
 the latest at that time).

We tend to backport security fixes and nasty bug fixes to the older versions
(those we have in the distribution). We don't pull in new versions because
new versions bring new features and thus new bugs. But I'm sure I'll get
slapped for saying that on an upstream mailing list :)

 Are there newer versions of these packages
 available in a PPA or somewhere? I have been working to build them 
 on my own, but the way that Ubuntu separates out the single source 
 package into many binary packages is making it difficult. 

In most cases, this very simple procedure goes without problems:

# fetch the new upstream tarball
wget http://upstream.com/supertool-new-version.tar.gz
# fetch the distro source package and install its build dependencies
apt-get source supertool
sudo apt-get build-dep supertool
# merge the new tarball into the existing packaging
cd supertool...
uupdate ../supertool-new-version.tar.gz
# build the binary packages
cd ../supertool-new-version
fakeroot dpkg-buildpackage

In case you want debugging symbols, instead of just building the
package, build it with a 'special' env variable:

DEB_BUILD_OPTIONS=nostrip dpkg-buildpackage

You could also fetch the source from the 12.10 release and build those
packages on 12.04.

-- 
Ante Karamatic ante.karama...@canonical.com
Professional and Engineering Services
Canonical Ltd



Re: [Pacemaker] Node fails to rejoin cluster

2013-02-14 Thread Proskurin Kirill

On 02/08/2013 04:59 AM, Andrew Beekhof wrote:


Suggests it's a bug that got fixed recently.  Keep an eye out for
1.1.9 in the next week or so (or you could try building from source if
you're in a hurry).


Will 1.1.9 be CentOS 5.x friendly?

--
Best regards,
Proskurin Kirill



Re: [Pacemaker] Is crm_gui available under RHEL6?

2013-02-14 Thread Lars Marowsky-Bree
On 2013-02-13T18:20:31, Ron Kerry rke...@sgi.com wrote:

 I am not sure if this is an appropriate question for a community forum since
 it is a RHEL-specific question. However, I cannot think of a better forum to
 use (as someone coming from a heavy SLES background), so I will ask it
 anyway. Feel free to shoot me down or point me in a different direction.

Hi Ron,

I'm not going to comment on RHEL nor pcs/crmsh.

But even on SLES, we've made a decision to focus on crmsh + hawk as a
web-based GUI. So I'd suggest investigating that rather than crm_gui.
The benefit is that it doesn't require an X server, and it works from all
systems capable of hosting a web browser.


Regards,
Lars

-- 
Architect Storage/HA
SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer, HRB 
21284 (AG Nürnberg)
Experience is the name everyone gives to their mistakes. -- Oscar Wilde




[Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

2013-02-14 Thread Cristiane França
Hello,
I installed Pacemaker (1.1.7-6) and DRBD (8.4.2-2) on my CentOS 6.3 server
(kernel 2.6.32-279.19.1 - 64 bits).
I'm having the following problem:
Pacemaker is not automatically mounting the DRBD partitions or deciding
which is the main machine.
Where is it configured to mount the partitions?

my server configuration:

node primario
node secundario
primitive ClusterIP ocf:heartbeat:IPaddr2 \
params ip=192.168.0.110 cidr_netmask=32 \
op monitor interval=30s
primitive database_fs ocf:heartbeat:Filesystem \
params device=/dev/drbd3 directory=/database fstype=ext4
primitive drbd_database ocf:linbit:drbd \
params drbd_resource=drbd3 \
op monitor interval=15s
primitive drbd_home ocf:linbit:drbd \
params drbd_resource=drbd1 \
op monitor interval=15s
primitive drbd_sistema ocf:linbit:drbd \
params drbd_resource=drbd2 \
op monitor interval=15s
primitive home_fs ocf:heartbeat:Filesystem \
params device=/dev/drbd1 directory=/home fstype=ext4
primitive sistema_fs ocf:heartbeat:Filesystem \
params device=/dev/drbd2 directory=/sistema fstype=ext4
ms ms_drbd_database drbd_database \
meta master-max=1 master-node-max=1 clone-max=2
clone-node-max=1 notify=true
ms ms_drbd_home drbd_home \
meta master-max=1 master-node-max=1 clone-max=2
clone-node-max=1 notify=true
ms ms_drbd_sistema drbd_sistema \
meta master-max=1 master-node-max=1 clone-max=2
clone-node-max=1 notify=true
colocation database_on_drbd inf: database_fs ms_drbd_database:Master
colocation fs_on_drbd inf: home_fs ms_drbd_home:Master
colocation sistema_on_drbd inf: sistema_fs ms_drbd_sistema:Master
order database_after_drbd inf: ms_drbd_database:promote database_fs:start
order fs_after_drbd inf: ms_drbd_home:promote home_fs:start
order sistema_after_drbd inf: ms_drbd_sistema:promote sistema_fs:start
property $id=cib-bootstrap-options \
dc-version=1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14 \
cluster-infrastructure=openais \
stonith-enabled=false \
no-quorum-policy=ignore \
expected-quorum-votes=2 \
last-lrm-refresh=1360756132
rsc_defaults $id=rsc-options \
resource-stickiness=100





Last updated: Thu Feb 14 10:21:47 2013
Last change: Thu Feb 14 09:45:16 2013 via cibadmin on primario
Stack: openais
Current DC: primario - partition with quorum
Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
2 Nodes configured, 2 expected votes
10 Resources configured.


Online: [ secundario primario ]

 ClusterIP (ocf::heartbeat:IPaddr2): Started primario
 Master/Slave Set: ms_drbd_home [drbd_home]
 drbd_home:0 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
 drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
 Master/Slave Set: ms_drbd_sistema [drbd_sistema]
 drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
 drbd_sistema:1 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
 Master/Slave Set: ms_drbd_database [drbd_database]
 drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
 drbd_database:1 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED

Failed actions:
drbd_database:0_stop_0 (node=primario, call=23, rc=5, status=complete):
not installed
drbd_home:1_stop_0 (node=primario, call=8, rc=5, status=complete): not
installed
drbd_sistema:0_stop_0 (node=primario, call=22, rc=5, status=complete):
not installed
drbd_home:0_stop_0 (node=secundario, call=18, rc=5, status=complete):
not installed
drbd_sistema:1_stop_0 (node=secundario, call=20, rc=5,
status=complete): not installed
drbd_database:1_stop_0 (node=secundario, call=19, rc=5,
status=complete): not installed



I'm sorry for my English.
Cristiane


Re: [Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

2013-02-14 Thread emmanuel segura
Hello Cristiane

Can you post your cluster logs and your DRBD config?

Thanks
2013/2/14 Cristiane França cristianedefra...@gmail.com

 Hello,
 I installed Pacemaker (1.1.7-6) and DRBD (8.4.2-2) on my CentOS 6.3 server
 (kernel 2.6.32-279.19.1 - 64 bits).
 I'm having the following problem:
 Pacemaker is not automatically mounting the DRBD partitions or deciding
 which is the main machine.
 Where is it configured to mount the partitions?

 my server configuration:

 node primario
 node secundario
 primitive ClusterIP ocf:heartbeat:IPaddr2 \
 params ip=192.168.0.110 cidr_netmask=32 \
 op monitor interval=30s
 primitive database_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd3 directory=/database fstype=ext4
 primitive drbd_database ocf:linbit:drbd \
 params drbd_resource=drbd3 \
 op monitor interval=15s
 primitive drbd_home ocf:linbit:drbd \
 params drbd_resource=drbd1 \
 op monitor interval=15s
 primitive drbd_sistema ocf:linbit:drbd \
 params drbd_resource=drbd2 \
 op monitor interval=15s
 primitive home_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd1 directory=/home fstype=ext4
 primitive sistema_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd2 directory=/sistema fstype=ext4
 ms ms_drbd_database drbd_database \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_home drbd_home \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_sistema drbd_sistema \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 colocation database_on_drbd inf: database_fs ms_drbd_database:Master
 colocation fs_on_drbd inf: home_fs ms_drbd_home:Master
 colocation sistema_on_drbd inf: sistema_fs ms_drbd_sistema:Master
 order database_after_drbd inf: ms_drbd_database:promote database_fs:start
 order fs_after_drbd inf: ms_drbd_home:promote home_fs:start
 order sistema_after_drbd inf: ms_drbd_sistema:promote sistema_fs:start
 property $id=cib-bootstrap-options \
 dc-version=1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14 \
 cluster-infrastructure=openais \
 stonith-enabled=false \
 no-quorum-policy=ignore \
 expected-quorum-votes=2 \
 last-lrm-refresh=1360756132
 rsc_defaults $id=rsc-options \
 resource-stickiness=100




 
 Last updated: Thu Feb 14 10:21:47 2013
 Last change: Thu Feb 14 09:45:16 2013 via cibadmin on primario
 Stack: openais
 Current DC: primario - partition with quorum
 Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
 2 Nodes configured, 2 expected votes
 10 Resources configured.
 

 Online: [ secundario primario ]

  ClusterIP (ocf::heartbeat:IPaddr2): Started primario
  Master/Slave Set: ms_drbd_home [drbd_home]
  drbd_home:0 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
  drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
  drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  drbd_sistema:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED
  Master/Slave Set: ms_drbd_database [drbd_database]
  drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  drbd_database:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED

 Failed actions:
 drbd_database:0_stop_0 (node=primario, call=23, rc=5,
 status=complete): not installed
 drbd_home:1_stop_0 (node=primario, call=8, rc=5, status=complete): not
 installed
 drbd_sistema:0_stop_0 (node=primario, call=22, rc=5, status=complete):
 not installed
 drbd_home:0_stop_0 (node=secundario, call=18, rc=5, status=complete):
 not installed
 drbd_sistema:1_stop_0 (node=secundario, call=20, rc=5,
 status=complete): not installed
 drbd_database:1_stop_0 (node=secundario, call=19, rc=5,
 status=complete): not installed



 I'm sorry for my English.
 Cristiane






-- 
esta es mi vida e me la vivo hasta que dios quiera


Re: [Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

2013-02-14 Thread Felipe Gutierrez
Hi Cristiane,

I am new to Pacemaker.
To mount the partitions automatically you use ocf:heartbeat:Filesystem.
In my configuration I create the mount point folders, e.g. /mnt/myPartition.

This is my configuration:

# crm configure property no-quorum-policy=ignore
# crm configure property stonith-enabled=false
primitive net_conn ocf:pacemaker:ping params pidfile=/var/run/ping.pid
host_list=192.168.188.1 op start interval=0 timeout=60s op stop
interval=0 timeout=20s op monitor interval=10s timeout=60s
clone clone_net_conn net_conn meta clone-node-max=1 clone-max=2
primitive cluster_ip ocf:heartbeat:IPaddr2 params ip=192.168.188.20
cidr_netmask=32 op monitor interval=10s
primitive cluster_mon ocf:pacemaker:ClusterMon params
pidfile=/var/run/crm_mon.pid htmlfile=/var/tmp/crm_mon.html op start
interval=0 timeout=20s op stop interval=0 timeout=20s op monitor
interval=10s timeout=20s
primitive drbd_r8 ocf:linbit:drbd params drbd_resource=r8 op monitor
interval=60s role=Master op monitor interval=59s role=Slave
ms drbd_r8_ms drbd_r8 meta master-max=1 master-node-max=1 clone-max=2
clone-node-max=1 notify=true
location ms_drbd_r8-no-conn drbd_r8_ms rule $id=ms_drbd_r8-no-conn-rule
$role=Master -inf: not_defined pingd or pingd number:lte 0
primitive drbd_r8_fs ocf:heartbeat:Filesystem params device=/dev/drbd8
directory=/mnt/drbd8 fstype=ext3
colocation fs_on_drbd inf: drbd_r8_fs drbd_r8_ms:Master
order fs_after_drbd inf: drbd_r8_ms:promote drbd_r8_fs:start
colocation coloc_mgmt inf: cluster_ip cluster_mon
colocation coloc_ms_ip inf: drbd_r8_ms:Master cluster_ip


Felipe

On Thu, Feb 14, 2013 at 11:23 AM, Cristiane França 
cristianedefra...@gmail.com wrote:

 Hello,
 I installed Pacemaker (1.1.7-6) and DRBD (8.4.2-2) on my CentOS 6.3 server
 (kernel 2.6.32-279.19.1 - 64 bits).
 I'm having the following problem:
 Pacemaker is not automatically mounting the DRBD partitions or deciding
 which is the main machine.
 Where is it configured to mount the partitions?

 my server configuration:

 node primario
 node secundario
 primitive ClusterIP ocf:heartbeat:IPaddr2 \
 params ip=192.168.0.110 cidr_netmask=32 \
 op monitor interval=30s
 primitive database_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd3 directory=/database fstype=ext4
 primitive drbd_database ocf:linbit:drbd \
 params drbd_resource=drbd3 \
 op monitor interval=15s
 primitive drbd_home ocf:linbit:drbd \
 params drbd_resource=drbd1 \
 op monitor interval=15s
 primitive drbd_sistema ocf:linbit:drbd \
 params drbd_resource=drbd2 \
 op monitor interval=15s
 primitive home_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd1 directory=/home fstype=ext4
 primitive sistema_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd2 directory=/sistema fstype=ext4
 ms ms_drbd_database drbd_database \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_home drbd_home \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_sistema drbd_sistema \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 colocation database_on_drbd inf: database_fs ms_drbd_database:Master
 colocation fs_on_drbd inf: home_fs ms_drbd_home:Master
 colocation sistema_on_drbd inf: sistema_fs ms_drbd_sistema:Master
 order database_after_drbd inf: ms_drbd_database:promote database_fs:start
 order fs_after_drbd inf: ms_drbd_home:promote home_fs:start
 order sistema_after_drbd inf: ms_drbd_sistema:promote sistema_fs:start
 property $id=cib-bootstrap-options \
 dc-version=1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14 \
 cluster-infrastructure=openais \
 stonith-enabled=false \
 no-quorum-policy=ignore \
 expected-quorum-votes=2 \
 last-lrm-refresh=1360756132
 rsc_defaults $id=rsc-options \
 resource-stickiness=100




 
 Last updated: Thu Feb 14 10:21:47 2013
 Last change: Thu Feb 14 09:45:16 2013 via cibadmin on primario
 Stack: openais
 Current DC: primario - partition with quorum
 Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
 2 Nodes configured, 2 expected votes
 10 Resources configured.
 

 Online: [ secundario primario ]

  ClusterIP (ocf::heartbeat:IPaddr2): Started primario
  Master/Slave Set: ms_drbd_home [drbd_home]
  drbd_home:0 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
  drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
  drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  drbd_sistema:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED
  Master/Slave Set: ms_drbd_database [drbd_database]
  drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  drbd_database:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED

 Failed actions:
 

Re: [Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

2013-02-14 Thread Cristiane França
Hello Emmanuel,

My drbd.conf:

global {
usage-count yes;
}

common {
syncer { rate 100M; }
protocol C;
}

resource home {
meta-disk internal;
device  /dev/drbd1;
startup {
wfc-timeout 0;  ## Infinite!
degr-wfc-timeout 120;   ## 2 minutes
}
disk {
on-io-error   detach;
}
net {
}
syncer {
rate 100M;
}
on primario {
disk   /dev/sdb1;
address  10.0.0.10:7767;
}
on secundario {
disk   /dev/sdb1;
address  10.0.0.20:7767;
}
}

resource sistema {
meta-disk internal;
device  /dev/drbd2;
handlers {
pri-on-incon-degr "echo o > /proc/sysrq-trigger ; halt -f";
pri-lost-after-sb "echo o > /proc/sysrq-trigger ; halt -f";
local-io-error "echo o > /proc/sysrq-trigger ; halt -f";
outdate-peer /usr/sbin/drbd-peer-outdater;
}
startup {
degr-wfc-timeout 120;
}
disk {
on-io-error   detach;
}
net {
after-sb-0pri disconnect;
after-sb-1pri disconnect;
after-sb-2pri disconnect;
rr-conflict disconnect;
}
syncer {
rate 100M;
al-extents 257;
}
on primario {
disk   /dev/sdb2;
address  10.0.0.10:7768;
}
on secundario {
disk   /dev/sdb2;
address  10.0.0.20:7768;
}
}


resource database {
meta-disk internal;
device  /dev/drbd3;
startup {
wfc-timeout 0;  ## Infinite!
degr-wfc-timeout 120;   ## 2 minutes
}
disk {
on-io-error   detach;
}
net {
}
syncer {
rate 100M;
}
on primario {
disk   /dev/sdb3;
address  10.0.0.10:7769;
}
on secundario {
disk   /dev/sdb3;
address  10.0.0.20:7769;
}
}



My log /var/log/cluster/corosync.log:

Feb 14 10:04:27 [10866] primario pengine:  warning:
common_apply_stickiness: Forcing ms_drbd_database away from
primario after 100 failures (max=100)
Feb 14 10:04:27 [10866] primario pengine:  warning:
common_apply_stickiness: Forcing ms_drbd_database away from
primario after 100 failures (max=100)
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_home:0_pre_notify_stop_0 comeplete
before ms_drbd_home_confirmed-pre_notify_stop_0: unmanaged failed resources
cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_home:1_pre_notify_stop_0 comeplete
before ms_drbd_home_confirmed-pre_notify_stop_0: unmanaged failed resources
cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_home:0_stop_0 comeplete before
ms_drbd_home_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_home:1_stop_0 comeplete before
ms_drbd_home_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_sistema:0_pre_notify_stop_0 comeplete
before ms_drbd_sistema_confirmed-pre_notify_stop_0: unmanaged failed
resources cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_sistema:1_pre_notify_stop_0 comeplete
before ms_drbd_sistema_confirmed-pre_notify_stop_0: unmanaged failed
resources cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_sistema:0_stop_0 comeplete before
ms_drbd_sistema_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_sistema:1_stop_0 comeplete before
ms_drbd_sistema_stopped_0: unmanaged failed resources cannot prevent clone
shutdown
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_database:0_pre_notify_stop_0 comeplete
before ms_drbd_database_confirmed-pre_notify_stop_0: unmanaged failed
resources cannot prevent clone shutdown
Feb 14 10:04:27 [10866] primario pengine:  warning: should_dump_input:
Ignoring requirement that drbd_database:1_pre_notify_stop_0 comeplete
before 

Re: [Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

2013-02-14 Thread emmanuel segura
Hello Cristiane

I think your Pacemaker config doesn't reference the resources defined in
your DRBD config.
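
For example (resource names taken from the drbd.conf posted above - the
drbd_resource parameter must match the DRBD resource name, not the device):

primitive drbd_database ocf:linbit:drbd \
params drbd_resource=database \
op monitor interval=15s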

2013/2/14 Cristiane França cristianedefra...@gmail.com

 Hello,
 I installed Pacemaker (1.1.7-6) and DRBD (8.4.2-2) on my CentOS 6.3 server
 (kernel 2.6.32-279.19.1 - 64 bits).
 I'm having the following problem:
 Pacemaker is not automatically mounting the DRBD partitions or deciding
 which is the main machine.
 Where is it configured to mount the partitions?

 my server configuration:

 node primario
 node secundario
 primitive ClusterIP ocf:heartbeat:IPaddr2 \
 params ip=192.168.0.110 cidr_netmask=32 \
 op monitor interval=30s
 primitive database_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd3 directory=/database fstype=ext4
 primitive drbd_database ocf:linbit:drbd \
 params drbd_resource=drbd3 \
 op monitor interval=15s
 primitive drbd_home ocf:linbit:drbd \
 params drbd_resource=drbd1 \
 op monitor interval=15s
 primitive drbd_sistema ocf:linbit:drbd \
 params drbd_resource=drbd2 \
 op monitor interval=15s
 primitive home_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd1 directory=/home fstype=ext4
 primitive sistema_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd2 directory=/sistema fstype=ext4
 ms ms_drbd_database drbd_database \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_home drbd_home \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_sistema drbd_sistema \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 colocation database_on_drbd inf: database_fs ms_drbd_database:Master
 colocation fs_on_drbd inf: home_fs ms_drbd_home:Master
 colocation sistema_on_drbd inf: sistema_fs ms_drbd_sistema:Master
 order database_after_drbd inf: ms_drbd_database:promote database_fs:start
 order fs_after_drbd inf: ms_drbd_home:promote home_fs:start
 order sistema_after_drbd inf: ms_drbd_sistema:promote sistema_fs:start
 property $id=cib-bootstrap-options \
 dc-version=1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14 \
 cluster-infrastructure=openais \
 stonith-enabled=false \
 no-quorum-policy=ignore \
 expected-quorum-votes=2 \
 last-lrm-refresh=1360756132
 rsc_defaults $id=rsc-options \
 resource-stickiness=100




 
 Last updated: Thu Feb 14 10:21:47 2013
 Last change: Thu Feb 14 09:45:16 2013 via cibadmin on primario
 Stack: openais
 Current DC: primario - partition with quorum
 Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
 2 Nodes configured, 2 expected votes
 10 Resources configured.
 

 Online: [ secundario primario ]

  ClusterIP (ocf::heartbeat:IPaddr2): Started primario
  Master/Slave Set: ms_drbd_home [drbd_home]
  drbd_home:0 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
  drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
  drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  drbd_sistema:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED
  Master/Slave Set: ms_drbd_database [drbd_database]
  drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  drbd_database:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED

 Failed actions:
 drbd_database:0_stop_0 (node=primario, call=23, rc=5,
 status=complete): not installed
 drbd_home:1_stop_0 (node=primario, call=8, rc=5, status=complete): not
 installed
 drbd_sistema:0_stop_0 (node=primario, call=22, rc=5, status=complete):
 not installed
 drbd_home:0_stop_0 (node=secundario, call=18, rc=5, status=complete):
 not installed
 drbd_sistema:1_stop_0 (node=secundario, call=20, rc=5,
 status=complete): not installed
 drbd_database:1_stop_0 (node=secundario, call=19, rc=5,
 status=complete): not installed



 I'm sorry for my English.
 Cristiane






-- 
esta es mi vida e me la vivo hasta que dios quiera


Re: [Pacemaker] Is crm_gui available under RHEL6?

2013-02-14 Thread Dejan Muhamedagic
On Thu, Feb 14, 2013 at 10:46:40AM +0100, Rasto Levrinc wrote:
 On Thu, Feb 14, 2013 at 12:20 AM, Ron Kerry rke...@sgi.com wrote:
  I am not sure if this is an appropriate question for a community forum since
  it is a RHEL-specific question. However, I cannot think of a better forum to
  use (as someone coming from a heavy SLES background), so I will ask it
  anyway. Feel free to shoot me down or point me in a different direction.

  I do not find the pacemaker GUI in any of the RHEL6 HA distribution rpms. I
  have tried to think of all of its various names - crm_gui, hb_gui,
  mgmt/haclient etc. - but I have not found it. A simple Google search also was
  not helpful - perhaps due to me not being sufficiently skilled at search
  techniques. Is it available somewhere in the RHEL6 HA distribution and I am
  just not finding it? Or do I need to build it from source or pull some
  community-built rpm off the web?
 
 I am also not aware of any crm_gui packages for RHEL6, not even a community
 build. But you should be able to compile it on RHEL6 from here:

 https://github.com/ClusterLabs/pacemaker-mgmt

 Luckily there are many alternative GUIs, but only 1 or 2 are really usable.

 In theory you can get a crmsh package from here:

 http://download.opensuse.org/repositories/network:/ha-clustering/

In practice too :) Every new version of crmsh is going to be
available there for the selected platforms. Along with
resource-agents, cluster-glue, etc.

 I don't see a HAWK package there, so it's probably still not compatible with
 the RHEL 6 Ruby version at this moment.

Right, hawk is not built. Tim should be able to tell why.

Cheers,

Dejan

 Then there's pcs-gui, but last time I checked it wasn't ready.

 Last but not least there's LCMC (http://lcmc.sf.net), which you install on
 your desktop computer, whatever it is, and use to configure and manage the
 cluster remotely via SSH.
 
 Rasto
 



Re: [Pacemaker] Is crm_gui available under RHEL6?

2013-02-14 Thread E-Blokos


- Original Message - 
From: Dejan Muhamedagic deja...@fastmail.fm

To: The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org
Sent: Thursday, February 14, 2013 9:53 AM
Subject: Re: [Pacemaker] Is crm_gui available under RHEL6?



On Thu, Feb 14, 2013 at 10:46:40AM +0100, Rasto Levrinc wrote:

On Thu, Feb 14, 2013 at 12:20 AM, Ron Kerry rke...@sgi.com wrote:
 I am not sure if this is an appropriate question for a community forum since
 it is a RHEL-specific question. However, I cannot think of a better forum to
 use (as someone coming from a heavy SLES background), so I will ask it
 anyway. Feel free to shoot me down or point me in a different direction.

 I do not find the pacemaker GUI in any of the RHEL6 HA distribution rpms. I
 have tried to think of all of its various names - crm_gui, hb_gui,
 mgmt/haclient etc. - but I have not found it. A simple Google search also was
 not helpful - perhaps due to me not being sufficiently skilled at search
 techniques. Is it available somewhere in the RHEL6 HA distribution and I am
 just not finding it? Or do I need to build it from source or pull some
 community-built rpm off the web?

I am also not aware of any crm_gui packages for RHEL6, not even a community
build. But you should be able to compile it on RHEL6 from here:

https://github.com/ClusterLabs/pacemaker-mgmt

Luckily there are many alternative GUIs, but only 1 or 2 are really usable.

In theory you can get a crmsh package from here:

http://download.opensuse.org/repositories/network:/ha-clustering/


In practice too :) Every new version of crmsh is going to be
available there for the selected platforms. Along with
resource-agents, cluster-glue, etc.

I don't see a HAWK package there, so it's probably still not compatible
with the RHEL 6 Ruby version at this moment.


Right, hawk is not built. Tim should be able to tell why.

Cheers,

Dejan

Then there's pcs-gui, but last time I checked it wasn't ready.

Last but not least there's LCMC (http://lcmc.sf.net), which you install on
your desktop computer, whatever it is, and use to configure and manage the
cluster remotely via SSH.

Rasto




I'm starting to feel nostalgic for the Pacemaker/OpenAIS spirit of 3 years
ago, and version 1.0.5.
Today I can confirm that for now it's risky to use Pacemaker on Fedora 18
(I don't know about other distros).
I just registered 2 nodes without any resources, and systemd complains
every day about corosync.

The state of software reflects its developers' emotions.

Franck








Re: [Pacemaker] Is crm_gui available under RHEL6?

2013-02-14 Thread E-Blokos


 I am not sure if this is an appropriate question for a community forum since
 it is a RHEL-specific question. However, I cannot think of a better forum to
 use (as someone coming from a heavy SLES background), so I will ask it
 anyway. Feel free to shoot me down or point me in a different direction.

Hi Ron,

I'm not going to comment on RHEL nor pcs/crmsh.

But even on SLES, we've made a decision to focus on crmsh + hawk as a
web-based GUI. So I'd suggest investigating that rather than crm_gui.
The benefit is that it doesn't require an X server, and it works from all
systems capable of hosting a web browser.


Regards,
   Lars



Yes, it's a good idea - at least when Ruby is not complaining.
On a standard F18 install it was impossible to make it work.

Regards

Franck




[Pacemaker] return properties and rsc_defaults back to default values

2013-02-14 Thread Brian J. Murrell
Is there a way to return an individual property (or all properties)
and/or a rsc_default (or all) back to default values, using crm, or
otherwise?

Cheers,
b.





Re: [Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

2013-02-14 Thread Cristiane França
Hi,
I configured the resources with the option is-managed=true.

crm(live)configure# edit ms_drbd_home
ms ms_drbd_home drbd_home \
meta is-managed=true master-max=1 master-node-max=1
clone-max=2 clone-node-max=1 notify=true



But the problem remains:


 Master/Slave Set: ms_drbd_home [drbd_home]
 drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
 Stopped: [ drbd_home:0 ]
 Master/Slave Set: ms_drbd_sistema [drbd_sistema]
 drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
 Stopped: [ drbd_sistema:1 ]
 Master/Slave Set: ms_drbd_database [drbd_database]
 drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
 Stopped: [ drbd_database:1 ]


regards,



On Thu, Feb 14, 2013 at 11:21 AM, emmanuel segura emi2f...@gmail.com wrote:

 Hello Cristiane

 I think your Pacemaker config doesn't reference the resources defined in
 your DRBD config

 2013/2/14 Cristiane França cristianedefra...@gmail.com

 Hello,
 I installed Pacemaker (1.1.7-6) and DRBD (8.4.2-2) on my CentOS 6.3 server
 (kernel 2.6.32-279.19.1 - 64 bits).
 I'm having the following problem:
 Pacemaker is not automatically mounting the DRBD partitions or deciding
 which is the main machine.
 Where is it configured to mount the partitions?

 my server configuration:

 node primario
 node secundario
 primitive ClusterIP ocf:heartbeat:IPaddr2 \
 params ip=192.168.0.110 cidr_netmask=32 \
 op monitor interval=30s
 primitive database_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd3 directory=/database fstype=ext4
 primitive drbd_database ocf:linbit:drbd \
 params drbd_resource=drbd3 \
 op monitor interval=15s
 primitive drbd_home ocf:linbit:drbd \
 params drbd_resource=drbd1 \
 op monitor interval=15s
 primitive drbd_sistema ocf:linbit:drbd \
 params drbd_resource=drbd2 \
 op monitor interval=15s
 primitive home_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd1 directory=/home fstype=ext4
 primitive sistema_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd2 directory=/sistema fstype=ext4
 ms ms_drbd_database drbd_database \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_home drbd_home \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_sistema drbd_sistema \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 colocation database_on_drbd inf: database_fs ms_drbd_database:Master
 colocation fs_on_drbd inf: home_fs ms_drbd_home:Master
 colocation sistema_on_drbd inf: sistema_fs ms_drbd_sistema:Master
  order database_after_drbd inf: ms_drbd_database:promote database_fs:start
 order fs_after_drbd inf: ms_drbd_home:promote home_fs:start
 order sistema_after_drbd inf: ms_drbd_sistema:promote sistema_fs:start
 property $id=cib-bootstrap-options \
 dc-version=1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
 \
 cluster-infrastructure=openais \
 stonith-enabled=false \
 no-quorum-policy=ignore \
 expected-quorum-votes=2 \
 last-lrm-refresh=1360756132
 rsc_defaults $id=rsc-options \
 resource-stickiness=100




 
 Last updated: Thu Feb 14 10:21:47 2013
 Last change: Thu Feb 14 09:45:16 2013 via cibadmin on primario
 Stack: openais
 Current DC: primario - partition with quorum
 Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
 2 Nodes configured, 2 expected votes
 10 Resources configured.
 

 Online: [ secundario primario ]

  ClusterIP (ocf::heartbeat:IPaddr2): Started primario
  Master/Slave Set: ms_drbd_home [drbd_home]
  drbd_home:0 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
  drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
  drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  drbd_sistema:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED
  Master/Slave Set: ms_drbd_database [drbd_database]
  drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged)
 FAILED
  drbd_database:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED

 Failed actions:
 drbd_database:0_stop_0 (node=primario, call=23, rc=5,
 status=complete): not installed
 drbd_home:1_stop_0 (node=primario, call=8, rc=5, status=complete):
 not installed
 drbd_sistema:0_stop_0 (node=primario, call=22, rc=5,
 status=complete): not installed
 drbd_home:0_stop_0 (node=secundario, call=18, rc=5, status=complete):
 not installed
 drbd_sistema:1_stop_0 (node=secundario, call=20, rc=5,
 status=complete): not installed
 drbd_database:1_stop_0 (node=secundario, call=19, rc=5,
 status=complete): not installed



 I'm sorry for my English.
 Cristiane



Re: [Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

2013-02-14 Thread emmanuel segura
Hello Cristiane

You need to change your Pacemaker config (the drbd primitives) like this

example:

primitive drbd_home ocf:linbit:drbd \
params drbd_resource=home \
op monitor interval=15s

In the drbd_resource parameter, put the name of your DRBD resource, not the
name of the device.

Thanks

2013/2/14 Cristiane França cristianedefra...@gmail.com

 Hi,
 I configured the resources with the option is-managed=true.

 crm(live)configure# edit ms_drbd_home
 ms ms_drbd_home drbd_home \
 meta is-managed=true master-max=1 master-node-max=1
 clone-max=2 clone-node-max=1 notify=true



 But the problem remains:


  Master/Slave Set: ms_drbd_home [drbd_home]
  drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Stopped: [ drbd_home:0 ]
  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
  drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Stopped: [ drbd_sistema:1 ]
  Master/Slave Set: ms_drbd_database [drbd_database]
  drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Stopped: [ drbd_database:1 ]


 regards,



 On Thu, Feb 14, 2013 at 11:21 AM, emmanuel segura emi2f...@gmail.com wrote:

 Hello Cristiane

 I think your Pacemaker config doesn't reference the resources defined in
 your DRBD config

 2013/2/14 Cristiane França cristianedefra...@gmail.com

 Hello,
 I installed Pacemaker (1.1.7-6) and DRBD (8.4.2-2) on my CentOS 6.3 server
 (kernel 2.6.32-279.19.1 - 64 bits).
 I'm having the following problem:
 Pacemaker is not automatically mounting the DRBD partitions or deciding
 which is the main machine.
 Where is it configured to mount the partitions?

 my server configuration:

 node primario
 node secundario
 primitive ClusterIP ocf:heartbeat:IPaddr2 \
 params ip=192.168.0.110 cidr_netmask=32 \
 op monitor interval=30s
 primitive database_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd3 directory=/database fstype=ext4
 primitive drbd_database ocf:linbit:drbd \
 params drbd_resource=drbd3 \
 op monitor interval=15s
 primitive drbd_home ocf:linbit:drbd \
 params drbd_resource=drbd1 \
 op monitor interval=15s
 primitive drbd_sistema ocf:linbit:drbd \
 params drbd_resource=drbd2 \
 op monitor interval=15s
 primitive home_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd1 directory=/home fstype=ext4
 primitive sistema_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd2 directory=/sistema fstype=ext4
 ms ms_drbd_database drbd_database \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_home drbd_home \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_sistema drbd_sistema \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 colocation database_on_drbd inf: database_fs ms_drbd_database:Master
 colocation fs_on_drbd inf: home_fs ms_drbd_home:Master
 colocation sistema_on_drbd inf: sistema_fs ms_drbd_sistema:Master
  order database_after_drbd inf: ms_drbd_database:promote
 database_fs:start
 order fs_after_drbd inf: ms_drbd_home:promote home_fs:start
 order sistema_after_drbd inf: ms_drbd_sistema:promote sistema_fs:start
 property $id=cib-bootstrap-options \

 dc-version=1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14 \
 cluster-infrastructure=openais \
 stonith-enabled=false \
 no-quorum-policy=ignore \
 expected-quorum-votes=2 \
 last-lrm-refresh=1360756132
 rsc_defaults $id=rsc-options \
 resource-stickiness=100




 
 Last updated: Thu Feb 14 10:21:47 2013
 Last change: Thu Feb 14 09:45:16 2013 via cibadmin on primario
 Stack: openais
 Current DC: primario - partition with quorum
 Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
 2 Nodes configured, 2 expected votes
 10 Resources configured.
 

 Online: [ secundario primario ]

  ClusterIP (ocf::heartbeat:IPaddr2): Started primario
  Master/Slave Set: ms_drbd_home [drbd_home]
  drbd_home:0 (ocf::linbit:drbd): Slave secundario (unmanaged) FAILED
  drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
  drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged)
 FAILED
  drbd_sistema:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED
  Master/Slave Set: ms_drbd_database [drbd_database]
  drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged)
 FAILED
  drbd_database:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED

 Failed actions:
 drbd_database:0_stop_0 (node=primario, call=23, rc=5,
 status=complete): not installed
 drbd_home:1_stop_0 (node=primario, call=8, rc=5, status=complete):
 not installed
 drbd_sistema:0_stop_0 (node=primario, call=22, rc=5,
 status=complete): not installed
 drbd_home:0_stop_0 (node=secundario, call=18, rc=5,
 

Re: [Pacemaker] return properties and rsc_defaults back to default values

2013-02-14 Thread Andreas Kurz
Hi Brian,

On 2013-02-14 16:48, Brian J. Murrell wrote:
 Is there a way to return an individual property (or all properties)
 and/or a rsc_default (or all) back to default values, using crm, or
 otherwise?

You mean besides deleting it?
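
A minimal sketch, assuming deletion is indeed what you want (the property
name here is just an example):

# removes the explicitly set value, so the built-in default applies again
crm_attribute --name stonith-enabled --delete
# or edit the property/rsc_defaults sections interactively:
crm configure edit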

Cheers,
Andreas

-- 
Need help with Pacemaker?
http://www.hastexo.com/now




Re: [Pacemaker] Pacemaker is not automatically mounting the DRBD partitions

2013-02-14 Thread Cristiane França
Hi Emmanuel,

Thank you very much!
I changed my pacemaker config as you suggested and the problem was solved.

Thanks.
Cristiane


On Thu, Feb 14, 2013 at 4:38 PM, emmanuel segura emi2f...@gmail.com wrote:

 Hello Cristiane

 You need to change your Pacemaker config (the drbd primitives) like this

 example:


 primitive drbd_home ocf:linbit:drbd \
 params drbd_resource=home \
 op monitor interval=15s

 In the drbd_resource parameter, put the name of your DRBD resource, not the
 name of the device.

 Thanks


 2013/2/14 Cristiane França cristianedefra...@gmail.com

 Hi,
 I configured the resources with the option is-managed=true.

 crm(live)configure# edit ms_drbd_home
 ms ms_drbd_home drbd_home \
 meta is-managed=true master-max=1 master-node-max=1
 clone-max=2 clone-node-max=1 notify=true



 But the problem remains:


  Master/Slave Set: ms_drbd_home [drbd_home]
  drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Stopped: [ drbd_home:0 ]
  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
  drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Stopped: [ drbd_sistema:1 ]
  Master/Slave Set: ms_drbd_database [drbd_database]
  drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged)
 FAILED
  Stopped: [ drbd_database:1 ]


 regards,



 On Thu, Feb 14, 2013 at 11:21 AM, emmanuel segura emi2f...@gmail.com wrote:

 Hello Cristiane

 I think your Pacemaker config doesn't reference the resources defined in
 your DRBD config

  2013/2/14 Cristiane França cristianedefra...@gmail.com

 Hello,
 I installed Pacemaker (1.1.7-6) and DRBD (8.4.2-2) on my CentOS 6.3 server
 (kernel 2.6.32-279.19.1 - 64 bits).
 I'm having the following problem:
 Pacemaker is not automatically mounting the DRBD partitions or deciding
 which is the main machine.
 Where is it configured to mount the partitions?

 my server configuration:

 node primario
 node secundario
 primitive ClusterIP ocf:heartbeat:IPaddr2 \
 params ip=192.168.0.110 cidr_netmask=32 \
 op monitor interval=30s
 primitive database_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd3 directory=/database fstype=ext4
 primitive drbd_database ocf:linbit:drbd \
 params drbd_resource=drbd3 \
 op monitor interval=15s
 primitive drbd_home ocf:linbit:drbd \
 params drbd_resource=drbd1 \
 op monitor interval=15s
 primitive drbd_sistema ocf:linbit:drbd \
 params drbd_resource=drbd2 \
 op monitor interval=15s
 primitive home_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd1 directory=/home fstype=ext4
 primitive sistema_fs ocf:heartbeat:Filesystem \
 params device=/dev/drbd2 directory=/sistema fstype=ext4
 ms ms_drbd_database drbd_database \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_home drbd_home \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 ms ms_drbd_sistema drbd_sistema \
 meta master-max=1 master-node-max=1 clone-max=2
 clone-node-max=1 notify=true
 colocation database_on_drbd inf: database_fs ms_drbd_database:Master
 colocation fs_on_drbd inf: home_fs ms_drbd_home:Master
 colocation sistema_on_drbd inf: sistema_fs ms_drbd_sistema:Master
  order database_after_drbd inf: ms_drbd_database:promote
 database_fs:start
 order fs_after_drbd inf: ms_drbd_home:promote home_fs:start
 order sistema_after_drbd inf: ms_drbd_sistema:promote sistema_fs:start
 property $id=cib-bootstrap-options \

 dc-version=1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14 \
 cluster-infrastructure=openais \
 stonith-enabled=false \
 no-quorum-policy=ignore \
 expected-quorum-votes=2 \
 last-lrm-refresh=1360756132
 rsc_defaults $id=rsc-options \
 resource-stickiness=100




 
 Last updated: Thu Feb 14 10:21:47 2013
 Last change: Thu Feb 14 09:45:16 2013 via cibadmin on primario
 Stack: openais
 Current DC: primario - partition with quorum
 Version: 1.1.7-6.el6-148fccfd5985c5590cc601123c6c16e966b85d14
 2 Nodes configured, 2 expected votes
 10 Resources configured.
 

 Online: [ secundario primario ]

  ClusterIP (ocf::heartbeat:IPaddr2): Started primario
  Master/Slave Set: ms_drbd_home [drbd_home]
  drbd_home:0 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED
  drbd_home:1 (ocf::linbit:drbd): Slave primario (unmanaged) FAILED
  Master/Slave Set: ms_drbd_sistema [drbd_sistema]
  drbd_sistema:0 (ocf::linbit:drbd): Slave primario (unmanaged)
 FAILED
  drbd_sistema:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED
  Master/Slave Set: ms_drbd_database [drbd_database]
  drbd_database:0 (ocf::linbit:drbd): Slave primario (unmanaged)
 FAILED
  drbd_database:1 (ocf::linbit:drbd): Slave secundario (unmanaged)
 FAILED

 Failed actions:
 drbd_database:0_stop_0 (node=primario, call=23, rc=5,
 status=complete): not installed
 

Re: [Pacemaker] Node fails to rejoin cluster

2013-02-14 Thread Andrew Beekhof
On Thu, Feb 14, 2013 at 9:34 PM, Proskurin Kirill
k.prosku...@corp.mail.ru wrote:
 On 02/08/2013 04:59 AM, Andrew Beekhof wrote:

 Suggests it's a bug that got fixed recently.  Keep an eye out for
 1.1.9 in the next week or so (or you could try building from source if
 you're in a hurry).


 Will 1.1.9 be CentOS 5.x friendly?

Yep



Re: [Pacemaker] Is crm_gui available under RHEL6?

2013-02-14 Thread Andrew Beekhof
On Fri, Feb 15, 2013 at 2:14 AM, E-Blokos in...@e-blokos.com wrote:

 - Original Message - From: Dejan Muhamedagic deja...@fastmail.fm
 To: The Pacemaker cluster resource manager pacemaker@oss.clusterlabs.org
 Sent: Thursday, February 14, 2013 9:53 AM
 Subject: Re: [Pacemaker] Is crm_gui available under RHEL6?



 On Thu, Feb 14, 2013 at 10:46:40AM +0100, Rasto Levrinc wrote:

 On Thu, Feb 14, 2013 at 12:20 AM, Ron Kerry rke...@sgi.com wrote:
  I am not sure if this is an appropriate question for a community forum since
  it is a RHEL-specific question. However, I cannot think of a better forum to
  use (as someone coming from a heavy SLES background), so I will ask it
  anyway. Feel free to shoot me down or point me in a different direction.

  I do not find the pacemaker GUI in any of the RHEL6 HA distribution rpms. I
  have tried to think of all of its various names - crm_gui, hb_gui,
  mgmt/haclient etc. - but I have not found it. A simple Google search also was
  not helpful - perhaps due to me not being sufficiently skilled at search
  techniques. Is it available somewhere in the RHEL6 HA distribution and I am
  just not finding it? Or do I need to build it from source or pull some
  community-built rpm off the web?

 I am also not aware of any crm_gui packages for RHEL6, not even a community
 build. But you should be able to compile it on RHEL6 from here:

 https://github.com/ClusterLabs/pacemaker-mgmt

 Luckily there are many alternative GUIs, but only 1 or 2 are really usable.

 In theory you can get a crmsh package from here:

 http://download.opensuse.org/repositories/network:/ha-clustering/


 In practice too :) Every new version of crmsh is going to be
 available there for the selected platforms. Along with
 resource-agents, cluster-glue, etc.

 I don't see a HAWK package there, so it's probably still not compatible
 with the RHEL 6 Ruby version at this moment.


 Right, hawk is not built. Tim should be able to tell why.

 Cheers,

 Dejan

 Then there's pcs-gui, but last time I checked it wasn't ready.

 Last but not least there's LCMC (http://lcmc.sf.net), which you install on
 your desktop computer, whatever it is, and use to configure and manage the
 cluster remotely via SSH.

 Rasto



 I'm starting to feel nostalgic for the Pacemaker/OpenAIS spirit of 3 years
 ago, and version 1.0.5.
 Today I can confirm that for now it's risky to use Pacemaker on Fedora 18
 (I don't know about other distros).
 I just registered 2 nodes without any resources, and systemd complains
 every day about corosync.
 The state of software reflects its developers' emotions.

Can you elaborate?
I wasn't aware of any issues on Fedora 18... I test there regularly.
