Hi,
 
I recompiled all packages, and now it works...
 
But /var/run/crm is empty in the new installation as well; is this ok?
 
Sent: Monday, 19 January 2015, 11:07
From: "Thomas Manninger" <dbgtmas...@gmx.at>
To: pacemaker@oss.clusterlabs.org
Subject: Re: [Pacemaker] no nodes on both hosts
Hi,
 
Now I see that there are no socket files in "/var/run/crm"; the directory is empty.
 
How can I debug the problem?
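(Not from the thread, just a suggestion: the sockets under /var/run/crm are created by the Pacemaker daemons at startup, so a first step is checking whether the daemons are up and whether the runtime directories your build expects actually exist. A minimal sketch, assuming the Debian default paths; a self-compiled Pacemaker may have been configured with a different --localstatedir:)

```shell
#!/bin/sh
# Report whether a Pacemaker runtime directory exists and how many
# entries it contains. Paths below are the assumed Debian defaults.
check_dir() {
    if [ -d "$1" ]; then
        echo "$1: present ($(ls -A "$1" | wc -l) entries)"
    else
        echo "$1: MISSING"
    fi
}

check_dir /var/run/crm           # IPC sockets created by the daemons
check_dir /var/lib/pacemaker/cib # cib.xml lives here
# If /var/run/crm never gets populated, the daemons likely failed to
# start, or the compiled binaries were built with a different
# localstatedir than the directory you are looking at.
```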
 
Sent: Monday, 19 January 2015, 10:18
From: "Thomas Manninger" <dbgtmas...@gmx.at>
To: pacemaker@oss.clusterlabs.org
Subject: Re: [Pacemaker] no nodes on both hosts
Hi,
 
I reinstalled Debian in the same VM and used the Debian pacemaker & corosync packages; everything worked fine.
Then I recompiled the newest pacemaker & corosync packages, and the same problem appeared...
 
Jan 19 10:13:31 [24271] pacemaker2        cib:     info: cib_process_request:   Completed cib_modify operation for section nodes: OK (rc=0, origin=pacemaker2/crmd/3, version=0.0.0)
Does this line mean that the node was added to cib.xml?
 
But there are no nodes:
root@pacemaker2:/var/lib/pacemaker/cib# cat cib.xml
<cib crm_feature_set="3.0.9" validate-with="pacemaker-2.0" epoch="0" num_updates="0" admin_epoch="0" cib-last-written="Mon Jan 19 10:13:30 2015">
  <configuration>
    <crm_config/>
    <nodes/>
    <resources/>
    <constraints/>
  </configuration>
</cib>
 
I also changed the permissions of the cib folder to 777...
 
Can someone help me?
Thanks!
 
Sent: Friday, 16 January 2015, 16:51
From: "Thomas Manninger" <dbgtmas...@gmx.at>
To: pacemaker@oss.clusterlabs.org
Subject: [Pacemaker] no nodes on both hosts
Hi,
 
I use Debian 7.
 
At first I used the standard Debian packages, and pacemaker worked perfectly.
 
Now I have compiled my own packages, because I need pacemaker_remote. Since switching to my compiled version, pacemaker sees no nodes!
 
Corosync 2 lists both hosts:
root@pacemaker1:/var/lib/pacemaker/cib# corosync-cmapctl  | grep members
runtime.totem.pg.mrp.srp.members.181614346.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.181614346.ip (str) = r(0) ip(10.211.55.10)
runtime.totem.pg.mrp.srp.members.181614346.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.181614346.status (str) = joined
runtime.totem.pg.mrp.srp.members.181614347.config_version (u64) = 0
runtime.totem.pg.mrp.srp.members.181614347.ip (str) = r(0) ip(10.211.55.11)
runtime.totem.pg.mrp.srp.members.181614347.join_count (u32) = 1
runtime.totem.pg.mrp.srp.members.181614347.status (str) = joined
 
root@pacemaker1:/var/lib/pacemaker/cib# crm_mon -1
Last updated: Fri Jan 16 16:49:10 2015
Last change: Fri Jan 16 16:05:15 2015
Current DC: NONE
0 Nodes configured
0 Resources configured
 
uname -n returns pacemaker1 / pacemaker2.
 
Logfile is attached.
 
corosync.conf:
totem {
    version: 2
    token: 5000
    # crypto_cipher and crypto_hash: Used for mutual node authentication.
    # If you choose to enable this, then do remember to create a shared
    # secret with "corosync-keygen".
    # enabling crypto_cipher, requires also enabling of crypto_hash.
    crypto_cipher: none
    crypto_hash: none
    # interface: define at least one interface to communicate
    # over. If you define more than one interface stanza, you must
    # also set rrp_mode.
    interface {
                # Rings must be consecutively numbered, starting at 0.
        ringnumber: 0
        # This is normally the *network* address of the
        # interface to bind to. This ensures that you can use
        # identical instances of this configuration file
        # across all your cluster nodes, without having to
        # modify this option.
        bindnetaddr: 10.211.55.10
        # However, if you have multiple physical network
        # interfaces configured for the same subnet, then the
        # network address alone is not sufficient to identify
        # the interface Corosync should bind to. In that case,
        # configure the *host* address of the interface
        # instead:
        # bindnetaddr: 192.168.1.1
        # When selecting a multicast address, consider RFC
        # 2365 (which, among other things, specifies that
        # 239.255.x.x addresses are left to the discretion of
        # the network administrator). Do not reuse multicast
        # addresses across multiple Corosync clusters sharing
        # the same network.
        mcastaddr: 239.255.1.1
        # Corosync uses the port you specify here for UDP
        # messaging, and also the immediately preceding
        # port. Thus if you set this to 5405, Corosync sends
        # messages over UDP ports 5405 and 5404.
        mcastport: 5405
        # Time-to-live for cluster communication packets. The
        # number of hops (routers) that this ring will allow
        # itself to pass. Note that multicast routing must be
        # specifically enabled on most network routers.
        ttl: 1
    }
}
logging {
    # Log the source file and line where messages are being
    # generated. When in doubt, leave off. Potentially useful for
    # debugging.
    fileline: off
    # Log to standard error. When in doubt, set to no. Useful when
    # running in the foreground (when invoking "corosync -f")
    to_stderr: no
    # Log to a log file. When set to "no", the "logfile" option
    # must not be set.
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    # Log to the system log daemon. When in doubt, set to yes.
    to_syslog: no
    # Log debug messages (very verbose). When in doubt, leave off.
    debug: on
    # Log messages with time stamps. When in doubt, set to on
    # (unless you are only logging to syslog, where double
    # timestamps can be annoying).
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}
quorum {
    # Enable and configure quorum subsystem (default: off)
    # see also corosync.conf.5 and votequorum.5
    #provider: corosync_votequorum
}
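(A hedged guess, not stated in the thread: with corosync 2.x, Pacemaker obtains its node membership through the votequorum API, so the commented-out provider line in the quorum section above could itself explain why no nodes appear. A minimal sketch of what that section might look like instead; the nodeids are assumptions, and the addresses are taken from the corosync-cmapctl output earlier in the thread:)

```
quorum {
    provider: corosync_votequorum
    # A two-node cluster needs this so the surviving node keeps quorum.
    two_node: 1
}
nodelist {
    node {
        ring0_addr: 10.211.55.10
        nodeid: 1
    }
    node {
        ring0_addr: 10.211.55.11
        nodeid: 2
    }
}
```

After changing corosync.conf, corosync and pacemaker would need to be restarted on both nodes for the new membership to take effect.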
 
Thanks!
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org
