Re: [Linux-HA] Antw: Re: SLES11 SP2 HAE: problematic change for LVM RA

2013-12-02 Thread emmanuel segura
The idea behind using exclusive volume activation mode with clvmd was (I
think) to have a VG that is known cluster-wide while its LVs are open on just
one node, and to have the LVM metadata replicated to all cluster nodes
whenever you make a change such as an LVM resize.

I have a Red Hat cluster with clvmd and a VG activated in exclusive mode: if
you add a PV to your volume group, every cluster node knows about the new PV
in the VG, but you still cannot activate the VG if it is active on another
node. I think clvmd is needed just to replicate the LVM metadata.
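
For illustration, exclusive activation of a clustered VG goes roughly like
this (a minimal sketch; the VG name vg_data is made up):

  # mark the VG as clustered, so clvmd/the DLM coordinate its locks
  vgchange -c y vg_data

  # activate exclusively on this node; the same command on a second node
  # fails while the exclusive lock is held elsewhere
  vgchange -a ey vg_data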


2013/12/2 Ulrich Windl ulrich.wi...@rz.uni-regensburg.de

  Lars Marowsky-Bree l...@suse.com wrote on 29.11.2013 at 13:48 in message 20131129124833.gf22...@suse.de:
  On 2013-11-29T13:46:17, Ulrich Windl ulrich.wi...@rz.uni-regensburg.de
 wrote:
 
   I just did s/true/false/...
  
   Was that a clustered volume?
  Clustered & exclusive=true ??
 
  No!
 
  Then it can't work. Exclusive activation only works for clustered volume
  groups, since it uses the DLM to protect against the VG being activated
  more than once in the cluster.
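
  Whether a VG is clustered shows up in its attribute bits: the sixth
  character of vg_attr is c for a clustered VG (sketch output for a
  hypothetical VG vg_data):

    # vgs -o vg_name,vg_attr vg_data
      VG      Attr
      vg_data wz--nc
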
 Hi!

 Try it with resource-agents-3.9.4-0.26.84: it works; with
 resource-agents-3.9.5-0.6.26.11 it doesn't work ;-)

 You could argue that it never should have worked. Anyway: if you want to
 activate a VG on exactly one node, you should not need cLVM; only if you mean
 to activate the VG on multiple nodes (as for a cluster file system)...
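
 For the single-node case, a plain (non-clustered) VG managed by the LVM
 resource agent would look roughly like this in crm syntax (a sketch; the
 resource/VG name vg1 is made up):

   primitive vg1 ocf:heartbeat:LVM \
           params volgrpname=vg1 exclusive=true \
           op monitor interval=60s timeout=60s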

 Regards,
 Ulrich


-- 
this is my life and I live it for as long as God wills
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems


Re: [Linux-HA] pcs configuration issue

2013-12-02 Thread Chris Feist

On 11/26/2013 03:27 AM, Willi Fehler wrote:

Hello,

I'm trying to create the following setup in Pacemaker 1.1.10.

pcs property set no-quorum-policy=ignore
pcs property set stonith-enabled=false
pcs resource create drbd_mysql ocf:linbit:drbd drbd_resource=r0 \
    op monitor interval=60s
pcs resource master ms_drbd_mysql drbd_mysql master-max=1 master-node-max=1 \
    clone-max=2 clone-node-max=1 notify=true
pcs resource create fs_mysql Filesystem device=/dev/drbd/by-res/r0 \
    directory=/var/lib/mysql fstype=xfs options=noatime
pcs resource create ip_mysql IPaddr2 ip=192.168.0.12 cidr_netmask=32 \
    op monitor interval=20s
pcs resource create ping ocf:pacemaker:ping host_list=192.168.0.1 \
    multiplier=100 dampen=10s op monitor interval=60s
pcs resource clone ping ping_clone globally-unique=false
pcs resource create mysqld ocf:heartbeat:mysql binary=/usr/sbin/mysqld \
    datadir=/var/lib/mysql config=/etc/my.cnf \
    pid=/var/run/mysqld/mysqld.pid socket=/var/run/mysqld/mysqld.sock \
    op monitor interval=15s timeout=30s op start interval=0 timeout=180s \
    op stop interval=0 timeout=300s
pcs resource group add mysql fs_mysql mysqld ip_mysql
pcs constraint colocation add mysql ms_drbd_mysql INFINITY with-rsc-role=Master
pcs constraint order promote ms_drbd_mysql then start mysql
pcs constraint location mysql rule pingd: defined ping


There are two issues here. First, there's a bug in pcs which doesn't recognize
groups in location constraint rules; second, the pcs rule syntax is slightly
different from crm's.


You should be able to use this command with the latest upstream:
pcs constraint location mysql rule score=pingd defined pingd
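
For comparison, the same rule in both syntaxes (the crm line is a sketch of
the equivalent form; the constraint id mysql-on-ping is made up):

  crm: location mysql-on-ping mysql rule pingd: defined pingd
  pcs: pcs constraint location mysql rule score=pingd defined pingd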

Thanks,
Chris



The last line is not working:

[root@linsrv006 ~]# pcs constraint location mysql rule pingd: defined pingd
Error: 'mysql' is not a resource

By the way, could somebody verify the other lines? I'm very new to pcs. Here is
my old configuration.

crm configure
crm(live)configure# primitive drbd_mysql ocf:linbit:drbd \
        params drbd_resource=r0 \
        op monitor interval=10s role=Master \
        op monitor interval=20s role=Slave \
        op start interval=0 timeout=240 \
        op stop interval=0 timeout=240
crm(live)configure# ms ms_drbd_mysql drbd_mysql \
        meta master-max=1 master-node-max=1 \
        clone-max=2 clone-node-max=1 \
        notify=true target-role=Master
crm(live)configure# primitive fs_mysql ocf:heartbeat:Filesystem \
        params device=/dev/drbd/by-res/r0 \
        directory=/var/lib/mysql fstype=xfs options=noatime \
        op start interval=0 timeout=180s \
        op stop interval=0 timeout=300s \
        op monitor interval=60s
crm(live)configure# primitive ip_mysql ocf:heartbeat:IPaddr2 \
        params ip=192.168.0.92 cidr_netmask=24 \
        op monitor interval=20
crm(live)configure# primitive ping_eth0 ocf:pacemaker:ping \
        params host_list=192.168.0.1 multiplier=100 \
        op monitor interval=10s timeout=20s \
        op start interval=0 timeout=90s \
        op stop interval=0 timeout=100s
crm(live)configure# clone ping_eth0_clone ping_eth0 \
        meta globally-unique=false
crm(live)configure# primitive mysqld ocf:heartbeat:mysql \
        params binary=/usr/sbin/mysqld \
        datadir=/var/lib/mysql config=/etc/my.cnf \
        pid=/var/run/mysqld/mysqld.pid socket=/var/run/mysqld/mysqld.sock \
        op monitor interval=15s timeout=30s \
        op start interval=0 timeout=180s \
        op stop interval=0 timeout=300s \
        meta target-role=Started
crm(live)configure# group mysql fs_mysql mysqld ip_mysql
crm(live)configure# location l_mysql_on_01 mysql 100: linsrv001.willi-net.local
crm(live)configure# location mysql-on-connected-node mysql \
        rule $id=mysql-on-connected-node-rule -inf: not_defined pingd or pingd lte 0
crm(live)configure# colocation mysql_on_drbd inf: mysql ms_drbd_mysql:Master
crm(live)configure# order mysql_after_drbd inf: ms_drbd_mysql:promote mysql:start
crm(live)configure# rsc_defaults $id=rsc-options \
        resource-stickiness=200
crm(live)configure# commit
crm(live)configure# exit

Thank you & Regards,
Willi


___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems