On 12/07/11 03:36, Graham Rawolle wrote:

Hi all,

I am trying to run Corosync and Pacemaker on openSUSE 11.4.

There are two machines, each with 2 network interfaces.

One interface on each machine is connected via a crossover cable to a private network (192.168.100.0), and the other interface is connected via a switch to our internal network (192.168.1.0).

Routing is set up so that 192.168.100.0/24 and multicast 224.0.0.0/4 are directed to eth1 (the private network).
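For reference, the multicast route was added roughly like this (from memory, so the exact command may differ slightly):

=============

# send all multicast traffic out the private (crossover) interface
ip route add 224.0.0.0/4 dev eth1

=============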

*Question* - Does anyone have Corosync/Pacemaker running reliably on openSUSE in a production environment? If so, which versions of openSUSE, Corosync and Pacemaker are you running?

Initially I was running Corosync 1.3.0-3.1-x86_64 and Pacemaker 1.1.5-3.2-x86_64. I managed to get Pacemaker started and partly configured, but ran into 100% CPU issues, apparently due to a bug in Corosync 1.3.0 that is fixed in version 1.3.2 and later.

I have upgraded to Corosync 1.4.2-25.1 for openSUSE 11.4 from build.opensuse.org, but now I cannot get Pacemaker to start.

*Question* - Does Corosync 1.4.2 support configuring the Pacemaker service with "ver: 0"?

The openSUSE-packaged versions of Pacemaker do not support configuring the Pacemaker service with "ver: 1" (pacemakerd and the pacemaker startup script are excluded from the packages).
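For comparison, my understanding is that a "ver: 1" setup would only change the service stanza and then rely on starting Pacemaker separately (via pacemakerd or its init script), roughly like this:

=============

service {
        # with ver: 1 the plugin does not spawn Pacemaker's daemons;
        # they have to be started separately by pacemakerd / the init script
        name: pacemaker
        ver: 1
}

=============

Since those startup pieces are not in the openSUSE packages, "ver: 0" seems to be my only option, hence the question above.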

Corosync seems to start and run fine. Both machines are seen and join the membership.

I get the following message in /var/log/messages:

Dec 06 15:55:59 corosync [ SERV ] Service failed to load 'pacemaker'.

Dec 06 15:55:59 corosync [ SERV ] Service failed to load 'pacemaker'.

Note that the message appears twice.

*Question* - How can I get more information on why Pacemaker failed to load?
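For example, is there a way to confirm that the plugin Pacemaker provides is actually installed, or to watch the load attempt directly? The best I can come up with is something like the following (the lcrso packaging is a guess on my part):

=============

# guess: check whether the Pacemaker LCR plugin (.lcrso) was shipped in the package
rpm -ql pacemaker | grep -i lcrso

# run Corosync in the foreground so any service-load errors appear on the terminal
corosync -f

=============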

I have the following versions installed:

openSUSE 11.4-1.8-x86_64

Cluster-glue 1.07-9.1-x86_64

Corosync 1.4.2-25.1-x86_64

Pacemaker 1.1.5-3.2-x86_64

Openais 1.1.4-8.1-x86_64 (installed because Pacemaker requires it)

===================================

My /etc/corosync/corosync.conf file:

=============

# Please read the corosync.conf.5 manual page
compatibility: whitetank

totem {
        version: 2
        secauth: off
        threads: 0
        interface {
                ringnumber: 0
                bindnetaddr: 192.168.100.0
                mcastaddr: 226.94.1.1
                mcastport: 5405
                ttl: 1
        }
}

logging {
        fileline: off
        to_stderr: no
        to_logfile: yes
        to_syslog: yes
        logfile: /var/log/cluster/corosync.log
        debug: on
        timestamp: on
        logger_subsys {
                subsys: AMF
                debug: on
        }
}

amf {
        mode: disabled
}

=============

My /etc/corosync/service.d/pcmk file:

=============

service {
        # Load the Pacemaker Resource Manager
        name: pacemaker
        ver: 0
}

=============
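If it helps, I assume the Corosync object database should show whether this file is even being parsed; something along these lines (corosync-objctl usage is a guess on my part):

=============

# dump the object database and look for the pacemaker service entry
corosync-objctl | grep -i pacemaker

=============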

Regards,

Graham Rawolle

Daintree Systems

Mawson Lakes, South Australia



_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org

At one time I tried this, but ran into quite a lot of problems. The SLES versions seemed less burdened by these; my guess is that the real development and testing (of course) happens there.

Personally, if money is involved, I think choosing those versions is still the wise decision, as you waste considerably less time.
I'm not with Novell, by the way ;-)


B.