Hi
I have a cluster with 2 nodes, with DRBD
I have configured a few resources in one group
If I migrate the group everything will be stopped correctly and started
on the other node
I can also migrate it back without problems.
Now, if I put the Master node into standby or reboot it, the group is not
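For reference, the manual migration and standby steps described above would typically be done with crmsh commands along these lines (illustrative only; the group name grp-services and the node names node1/node2 are placeholders, not taken from the poster's configuration):

```shell
# Move the resource group to the other node (this creates a temporary
# location constraint pinning it there):
crm resource migrate grp-services node2

# Remove that constraint again so the group may move back later:
crm resource unmigrate grp-services

# Put a node into standby (all resources running on it are stopped
# and, where possible, started elsewhere):
crm node standby node1

# Bring the node back online:
crm node online node1
```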
----- Original Message -----
From: Ian cl-3...@jusme.com
To: Clusterlabs (pacemaker) mailing list pacemaker@oss.clusterlabs.org
Sent: Monday, May 12, 2014 3:02:50 PM
Subject: [Pacemaker] Pacemaker unnecessarily (?) restarts a vm on active node when other node brought out of standby
David Vossel wrote:
does setting resource-stickiness help?
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#s-resource-options
Thanks for the suggestion. I applied resource-stickiness=100 to the vm
resource, but it doesn't seem to have any effect (same
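For anyone following along, stickiness can be applied either per resource or cluster-wide; with pcs (matching the linked 1.1-pcs documentation) the commands would look roughly like this (the resource name vm is taken from the thread; everything else is illustrative):

```shell
# Per-resource stickiness on the vm resource:
pcs resource meta vm resource-stickiness=100

# Or set a cluster-wide default that applies to all resources:
pcs resource defaults resource-stickiness=100
```

Note that per-resource stickiness only raises the score for keeping that one resource where it is; if the restart is being forced by constraints involving other resources (ordering or colocation), stickiness on the vm alone may not prevent it.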
On 9 May 2014, at 6:09 pm, Denis Witt denis.w...@concepts-and-training.de
wrote:
Hi List,
since the newest update of corosync/pacemaker (running Debian Wheezy, so
versions are 1.1.7-1
1.1.7 came out in March 2012 and upstream is now testing 1.1.12... I wouldn't call
that new.
for pacemaker
cluster version, config and logs would be a good starting point.
probably best to use crm_report to create a support tarball.
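As a sketch, crm_report is usually invoked with a start time (and optionally an end time) covering the incident; the destination name is arbitrary (illustrative invocation, with the dates adjusted to when the problem actually occurred):

```shell
# Collect logs, cluster config and PE inputs from all nodes
# into a single tarball for the given time window:
crm_report -f "2014-05-09 00:00" -t "2014-05-10 00:00" /tmp/pcmk-report
# Produces a tarball (e.g. /tmp/pcmk-report.tar.bz2) to attach
# to the mailing list post.
```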
On 14 May 2014, at 12:42 am, W Forum W wfor...@gmail.com wrote:
Hi
I have a cluster with 2 nodes, with DRBD
I have configured a few resources in one group
If I
On 9 May 2014, at 11:24 am, renayama19661...@ybb.ne.jp wrote:
Hi All,
We confirmed a problem when we performed a cleanup of a Master/Slave
resource in Pacemaker 1.0.
When this problem occurs, probe processing is not carried out.
I registered the problem with Bugzilla.
Hi Andrew,
Thank you for comments.
Do you guys have any timeframe for moving away from 1.0.x?
The 1.1 series is over 4 years old now and quite usable :-)
There is really a (low) limit to how much effort I can put into support for
it.
We will gradually move from Pacemaker 1.0 to Pacemaker 1.1,