, 2010 at 7:16 PM, Tim Serong tser...@novell.com wrote:
On 11/30/2010 at 10:11 AM, Alan Jones falanclus...@gmail.com wrote:
On Thu, Nov 25, 2010 at 6:32 AM, Tim Serong tser...@novell.com wrote:
Can you elaborate on why you want this particular behaviour? Maybe
there's some other way to approach the problem?
I have explained the issue as clearly as I know how. The problem is fundamental
to the design of the policy
On Sat, Nov 20, 2010 at 1:05 AM, Andrew Beekhof and...@beekhof.net wrote:
Then -2 obviously isn't big enough, is it.
I need a value between and not including -inf and -2 that will work.
All the values I've tried don't, so I'm open to suggestions.
Please read and understand:
at 11:18 PM, Andrew Beekhof and...@beekhof.net wrote:
On Fri, Nov 5, 2010 at 4:07 AM, Vadym Chepkov vchep...@gmail.com wrote:
On Nov 4, 2010, at 12:53 PM, Alan Jones wrote:
If I understand you correctly, the role of the second resource in the
colocation command was defaulting to that of the first
On Sat, Nov 13, 2010 at 3:20 AM, Andrew Beekhof and...@beekhof.net wrote:
On Fri, Nov 12, 2010 at 5:27 PM, Alan Jones falanclus...@gmail.com wrote:
I have tried larger values. If you know of a value that *should*
work, please share it.
INFINITY
My understanding is that a colocation score of minus infinity will
prevent the resources from running on the same node, which in my
configuration would result in a loss of availability. The goal
On Thu, Nov 11, 2010 at 11:31 PM, Andrew Beekhof and...@beekhof.net wrote:
colocation X-Y -2: X Y
colocation Y-X -2: Y X
the second one is implied by the first and is therefore redundant
If only that were true!
What happens with the first rule is that other constraints that force
Y to a node
I've looked into the code more and added more logging, etc.
The pengine essentially walks the list of constraints, applying
weights, and then walks the list of resources and tallies the weights.
In my example, it ends up walking the resources backward, i.e. it
assigns a node to Y and then assigns
On Nov 4, 2010, at 12:53 PM, Alan Jones wrote:
If I understand you correctly, the role of the second resource in the
colocation command was defaulting to that of the first, Master, which
is not defined or is untested for non-ms resources.
Unfortunately, after changing that line to:
colocation mystateful-ms-loc inf: mystateful-ms:Master
This question should be on the openais list; however, I happen to know
the answer.
To get up and running quickly you can configure broadcast with the
version you have.
Corosync can distinguish separate clusters with the multicast address
and port that become payload to the messages.
The patch you
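A minimal corosync.conf sketch of the broadcast setup described above; the addresses and port are illustrative values, not taken from the thread:

```
totem {
    version: 2
    interface {
        ringnumber: 0
        # Network address of the local interface (example value)
        bindnetaddr: 192.168.1.0
        # Broadcast avoids multicast routing issues with corosync 1.x
        broadcast: yes
        # The port (and, for multicast, the address) distinguishes clusters
        mcastport: 5405
    }
}
```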
I'm running with Pacemaker 1.0.9.1 and Corosync 1.2.7.
I have a simple config below where colocation seems to have the opposite effect.
Note that if you force myprim's location then mystateful's Master will
colocate correctly.
The command I use to force is: location myprim-loc myprim -inf:
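For reference, the full form of such a forcing constraint in crm syntax looks like the following; the node name is a placeholder I've invented, since the original command is truncated:

```
# Ban myprim from a hypothetical node "node2", forcing it elsewhere
location myprim-loc myprim -inf: node2
```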
Hi,
Pacemaker 1.0.9.1, Corosync 1.2.7
I have a sane master/slave configuration that gives me normal looking
notify() calls when I standby each node in turn.
However, when I configure the master/slave on a group of three
resources, things look pretty strange.
Note that I get no post calls at all.
I'm trying to configure a simple resource that depends on a local clone.
The configuration is below.
For those familiar with the Veritas Cluster Server, I'm trying to get
something like permanent resources.
Unfortunately, the simple resource (foo) will not start until *both* bar
clones are up.
The
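A sketch of the configuration being described, in crm shell syntax; the resource agents and names are assumptions, since the actual configuration is not shown here:

```
primitive bar ocf:pacemaker:Dummy
clone bar-clone bar
primitive foo ocf:pacemaker:Dummy
# foo only needs the local copy of bar, yet with plain constraints like
# these the start of foo can wait on the whole clone:
colocation foo-with-bar inf: foo bar-clone
order bar-before-foo inf: bar-clone foo
```

The clone meta attribute interleave=true is often relevant to this kind of symptom, though I can't confirm it is the fix in this particular case.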
Has anyone configured pacemaker to simulate multiple nodes with multiple
process instances?
Ideally, I'd like to bind corosync daemons to different loopback IPs (e.g.
127.0.0.1, 127.0.0.2, etc) and somehow direct the pacemaker instances to
separate corosync processes.
Any thoughts or comments
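One way this is commonly sketched (an untested assumption on my part): give each corosync instance its own configuration, with each totem interface bound to a different loopback alias:

```
# Hypothetical per-instance config, e.g. /etc/corosync-2/corosync.conf
totem {
    version: 2
    interface {
        ringnumber: 0
        # corosync normally expects a network address here, but a full
        # interface IP is accepted (with a warning), which lets separate
        # instances bind to 127.0.0.1, 127.0.0.2, ...
        bindnetaddr: 127.0.0.2
        mcastaddr: 226.94.1.1
        mcastport: 5405
    }
}
```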
in pacemaker's configure command line options.
Do you know the answer?
Alan
On Wed, Aug 4, 2010 at 1:18 AM, Andrew Beekhof and...@beekhof.net wrote:
On Tue, Aug 3, 2010 at 2:23 AM, Alan Jones falanclus...@gmail.com wrote:
I'd like to configure pacemaker to use corosync without the openais
package
I'd like to configure pacemaker to use corosync without the openais package.
We have our own custom Linux distro, so I'm trying to compile:
Pacemaker-1-0-Pacemaker-1.0.9.tar.bz2
Reusable-Cluster-Components-8286b46c91e3.tar.bz2
corosync-1.2.7.tar.gz
The relevant options seem to be:
- configure
On Wed, Apr 7, 2010 at 6:38 AM, Andrew Beekhof and...@beekhof.net wrote:
It seems there are only two configuration options for pacemaker as
started by corosync: use_logd, which I've enabled, and use_mgmtd, which
I don't understand.
pacemaker also uses the logging options of corosync.conf
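For reference, a corosync.conf logging stanza of the kind being referred to; the values are examples:

```
logging {
    to_syslog: yes
    to_logfile: yes
    logfile: /var/log/corosync.log
    # Raises the debug level for corosync and the pacemaker plugin alike
    debug: on
}
```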
Hi,
I would like to set debugging levels higher than zero with
pacemaker/corosync.
[r...@fc12-a heartbeat]# ./crmd version
CRM Version: 1.0.5 (ee19d8e83c2a5d45988f1cee36d334a631d84fc7)
[r...@fc12-a heartbeat]# corosync -v
Corosync Cluster Engine, version '1.1.2' SVN revision '2539'
Copyright (c)
); \
cl_log(LOG_INFO, fmt, ##args); \
} while(0)
#endif
On Thu, Apr 1, 2010 at 4:03 PM, Alan Jones falanclus...@gmail.com wrote:
Hi,
I would like to set debugging levels higher than zero with
pacemaker/corosync.
[r
Friends,
The ocf:pacemaker:Dummy example resource agent script specifies a default
monitoring interval (10), which I assume is 10 seconds. This seems like the
appropriate place to specify this interval, i.e., the resource
implementation knows how heavyweight the monitor is and what
is a good
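In crm shell terms the interval from the agent's metadata ends up on the operation definition, e.g. (the timeout value is my own example):

```
primitive mydummy ocf:pacemaker:Dummy \
    op monitor interval="10s" timeout="20s"
```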
to overcome the
negative colocation value to allow them both to run on one node.
If there is a more elegant solution, let me know.
Alan
On Tue, Mar 23, 2010 at 8:24 AM, Andrew Beekhof and...@beekhof.net wrote:
On Mon, Mar 22, 2010 at 9:18 PM, Alan Jones falanclus...@gmail.com wrote:
Well, I guess
Friends,
I have what should be a simple goal: two resources to run on two nodes.
I'd like to configure them to run on separate nodes when available, i.e.
active-active, and to run together on either node when one fails, i.e.
failover.
Up until this point I have assumed that this
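A hedged sketch of one common way to express that goal in crm syntax; the names and scores are placeholders, and as the rest of the thread shows, finding a score that actually behaves this way was exactly the problem:

```
location r1-prefers-node1 r1 100: node1
location r2-prefers-node2 r2 100: node2
# Finite negative score: prefer separate nodes, but still allow
# co-location when only one node is available
colocation keep-apart -100: r1 r2
```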
Is there any interest among people working with Pacemaker to provide for
restarting crmd locally without failover and rediscovering resource agent
states through their monitor scripts?
Alan
On Wed, Mar 17, 2010 at 1:39 PM, Andrew Beekhof and...@beekhof.net wrote:
On Wed, Mar 17, 2010 at 7:23 PM, Alan Jones falanclus...@gmail.com wrote:
Is there any interest among people working with Pacemaker to provide for
restarting crmd locally without failover and rediscovering resource
I'm trying to follow the code in lib/ais/plugin.c
In many functions the first argument conn is assigned to a local
async_conn which is never modified, e.g.:
void pcmk_notify(void *conn, ais_void_ptr *msg)
{
    const AIS_Message *ais_msg = msg;
    char *data = get_ais_data(ais_msg);
void
It appears from the code in lib/ais/plugin.c:pcmk_peer_update() that
Pacemaker ignores transitional membership updates from Corosync. It is
my understanding that this information tells you which members have
maintained synchronized state during transitions. For example,
view AB on both A and B
Hi,
I'm trying to use the following software together:
cluster-glue-1.0.2-1.fc13.src.rpm
corosync-1.2.0-1.fc13.src.rpm
openais-1.1.2-1.fc13.src.rpm
pacemaker-1.0.5-5.fc13.src.rpm
I'm having trouble with crmd as I wrote earlier:
Feb 4 17:57:42 dd690-42 crmd: [1910]: WARN: lrm_signon: can not
srwxrwxrwx 1 hacluster root 0 Feb 5 11:21 pengine-bash-3.00
# ls -l /var/run
drwxr-x--- 2 hacluster haclient 4096 Feb 5 11:21 crm
drwxr-xr-x 2 hacluster haclient 4096 Feb 5 10:48 heartbeat
On Fri, Feb 5, 2010 at 9:53 AM, Alan Jones falanclus...@gmail.com wrote:
Hi,
I'm trying to use
The answer is to use configure options to get the different projects to
agree on where /var is.
Alan
On Fri, Feb 5, 2010 at 11:28 AM, Alan Jones falanclus...@gmail.com wrote:
Ok, hb_report only collects existing logs so it won't help me *get* crmd to
use the logd.
However, I am making progress
I'm trying to run with corosync 1.2.0 and pacemaker 1.0.5 and get the
following repeatedly in /var/log/messages:
Feb 4 17:57:42 dd690-42 crmd: [1910]: WARN: lrm_signon: can not initiate
connection
Feb 4 17:57:42 dd690-42 crmd: [1910]: WARN: do_lrm_control: Failed to sign
on to the LRM 1 (30