Re: [Linux-HA] Lots of configuration changes

2013-06-07 Thread Angel L. Mateo
	The problem seems to be fixed. I changed nothing in my 
configuration, but we had a problem in our network (it seems to have 
been a loop in the spanning tree of some VLANs). Once the network 
problem was fixed, the cluster ran without problems.
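
For anyone hitting the same symptom: once the network is healthy, a 
quick check along these lines (run on either node) should show the 
membership staying stable. The exact output format differs between 
cman/corosync versions, so treat this as a rough sketch rather than 
canonical commands.

# Watch the cluster generation and membership state; on a healthy
# cluster the generation stops incrementing and both nodes stay listed.
watch -n 2 'cman_tool status | grep -i -e generation -e membership'

# Ring status as corosync sees it (expect "no faults" on each ring).
corosync-cfgtool -s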


On 06/06/13 10:26, Angel L. Mateo wrote:

Hello,

 I have a two-node cluster with cman (v3.1.7) and pacemaker (v1.1.6)
running on Ubuntu 12.04 (as recommended in the pacemaker documentation).
This cluster had been running without problems for more than a month
(since I installed it), but I have been having problems since I rebooted
one node yesterday (I rebooted the standby node).

 The problem I'm having is that cman is continuously changing its
configuration. I can see with cman_tool status that the cluster
generation value is continuously increasing.

 In the logs there are a lot of messages of the form:

Jun  6 09:29:49 myotis52 corosync[32048]:   [TOTEM ] A processor joined
or left the membership and a new membership was formed.
Jun  6 09:29:49 myotis52 corosync[32048]:   [CPG   ] chosen downlist:
sender r(0) ip(155.54.211.166) ; members(old:1 left:0)
Jun  6 09:29:49 myotis52 corosync[32048]:   [MAIN  ] Completed service
synchronization, ready to provide service.
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] CLM CONFIGURATION
CHANGE
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] New Configuration:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] r(0)
ip(155.54.211.166)
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] Members Left:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] Members Joined:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] CLM CONFIGURATION
CHANGE
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] New Configuration:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] r(0)
ip(155.54.211.166)
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] Members Left:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] Members Joined:

 This is happening even when I run cman on just one node.

 I have attached my cluster.conf file. In it I'm using the udpu
transport because the nodes run on VMware infrastructure, where I had
problems with multicast.

 I have checked the IP configuration and it is correct: the IPs are
correctly configured in /etc/hosts and the interfaces are working
without any (apparent) problem. I have also tried this same
configuration without the redundant ring (that is, without the altname
options and with dlm control=detect and rrp_mode=none), but I get the
same problem.

 Any help?







--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información
y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tfo: 868887590
Fax: 86337


[Linux-HA] Lots of configuration changes

2013-06-06 Thread Angel L. Mateo

Hello,

	I have a two-node cluster with cman (v3.1.7) and pacemaker (v1.1.6) 
running on Ubuntu 12.04 (as recommended in the pacemaker documentation). 
This cluster had been running without problems for more than a month 
(since I installed it), but I have been having problems since I rebooted 
one node yesterday (I rebooted the standby node).


	The problem I'm having is that cman is continuously changing its 
configuration. I can see with cman_tool status that the cluster 
generation value is continuously increasing.


	In the logs there are a lot of messages of the form:

Jun  6 09:29:49 myotis52 corosync[32048]:   [TOTEM ] A processor joined 
or left the membership and a new membership was formed.
Jun  6 09:29:49 myotis52 corosync[32048]:   [CPG   ] chosen downlist: 
sender r(0) ip(155.54.211.166) ; members(old:1 left:0)
Jun  6 09:29:49 myotis52 corosync[32048]:   [MAIN  ] Completed service 
synchronization, ready to provide service.
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] CLM CONFIGURATION 
CHANGE

Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] New Configuration:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] 	r(0) 
ip(155.54.211.166)

Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] Members Left:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] Members Joined:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] CLM CONFIGURATION 
CHANGE

Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] New Configuration:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] 	r(0) 
ip(155.54.211.166)

Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] Members Left:
Jun  6 09:29:51 myotis52 corosync[32048]:   [CLM   ] Members Joined:

This is happening even when I run cman on just one node.

	I have attached my cluster.conf file. In it I'm using the udpu 
transport because the nodes run on VMware infrastructure, where I had 
problems with multicast.
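
	In case it is relevant: since multicast was already unreliable on 
this VMware setup, a basic reachability test between the nodes is on my 
list. Assuming the omping package is available and the host names 
resolve as in cluster.conf, I would run something like this on both 
nodes at the same time:

# omping reports unicast and multicast packet loss between the hosts
# listed on its command line; run it simultaneously on both nodes.
omping myotis51 myotis52

# Repeat for the backup ring interfaces as well.
omping myotis51_backup myotis52_backup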


	I have checked the IP configuration and it is correct: the IPs are 
correctly configured in /etc/hosts and the interfaces are working 
without any (apparent) problem. I have also tried this same 
configuration without the redundant ring (that is, without the altname 
options and with dlm control=detect and rrp_mode=none), but I get the 
same problem.
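
	For reference, the checks behind that statement were roughly the 
following (standard cman/corosync 1.x tools; output details vary by 
version):

# Confirm each cluster node name resolves to the address I expect.
getent hosts myotis51 myotis52 myotis51_backup myotis52_backup

# Node membership as cman sees it.
cman_tool nodes

# Ring status per interface (expect "no faults" on each active ring).
corosync-cfgtool -s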


Any help?

--
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información
y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tfo: 868889150
Fax: 86337
<?xml version="1.0"?>
<cluster config_version="4" name="pacemaker">
  <dlm protocol="sctp"/>
  <fence_daemon clean_start="0" post_fail_delay="0" post_join_delay="3"/>
  <clusternodes>
    <clusternode name="myotis51" nodeid="1" votes="1">
      <altname name="myotis51_backup"/>
      <fence>
        <method name="pcmk-redirect">
          <device name="pcmk" port="server1"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="myotis52" nodeid="2" votes="1">
      <altname name="myotis52_backup"/>
      <fence>
        <method name="pcmk-redirect">
          <device name="pcmk" port="server2"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice name="pcmk" agent="fence_pcmk"/>
  </fencedevices>
  <cman transport="udpu" two_node="1" expected_votes="1"/>
  <totem rrp_mode="active"/>
  <logging debug="off" to_syslog="yes" to_logfile="no"/>
</cluster>

[Linux-HA] Is CLVM really needed in an active/passive cluster?

2013-04-22 Thread Angel L. Mateo
Hello,

I'm deploying a clustered POP/IMAP server with mailboxes stored on a 
SAN connected via Fibre Channel.

The problem is that I initially configured the cluster with CLVM, but 
with CLVM I can't create snapshots of my volumes, which I need for 
backups.

But is CLVM really necessary? Or is it enough to configure LVM with 
fencing and stonith?
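
To make the question concrete, the non-CLVM setup I have in mind would 
be roughly the following (crm shell one-liners; the resource, volume 
group and mount point names are just placeholders, not my real ones): 
the LVM agent activates the volume group exclusively on the active 
node, and stonith guarantees the passive node is really down before 
any takeover.

# Sketch only: exclusive activation of a plain LVM volume group plus
# its filesystem, grouped so they move together on failover.
crm configure primitive p_lvm_mail ocf:heartbeat:LVM \
    params volgrpname="vg_mail" exclusive="true" \
    op monitor interval="30s"
crm configure primitive p_fs_mail ocf:heartbeat:Filesystem \
    params device="/dev/vg_mail/lv_mail" directory="/var/mail" fstype="ext4" \
    op monitor interval="30s"
crm configure group g_mail p_lvm_mail p_fs_mail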

-- 
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información
y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tfo: 868889150
Fax: 86337


Re: [Linux-HA] Is CLVM really needed in an active/passive cluster?

2013-04-22 Thread Angel L. Mateo
On 22/04/13 11:20, emmanuel segura wrote:
 Hello Angel

 In this thread
 http://comments.gmane.org/gmane.linux.redhat.release.rhel5/6395  you can
 find the answer to your question

In that thread I can see that cLVM does not support snapshots. There is 
a link pointing to dd as a workaround for snapshots (but I have a 10 TB 
FS and dd is not an option).

So my question remains: is cLVM really needed? Or would legacy LVM with 
a fencing and stonith mechanism be enough?
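
If plain LVM is enough, the backup path would be ordinary LVM snapshots 
taken on the active node, roughly like this (names and sizes are only 
placeholders):

# Create a temporary snapshot of the mail logical volume on the active
# node, run the backup from it, then drop it.
lvcreate --snapshot --size 50G --name lv_mail_snap /dev/vg_mail/lv_mail
# ... mount the snapshot read-only somewhere and back it up ...
lvremove -f /dev/vg_mail/lv_mail_snap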

-- 
Angel L. Mateo Martínez
Sección de Telemática
Área de Tecnologías de la Información
y las Comunicaciones Aplicadas (ATICA)
http://www.um.es/atica
Tfo: 868889150
Fax: 86337
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems