Thanks, Aswathi.
(My account had stopped working due to mail bounces; I've never seen that
happen with Gmail accounts.)

Ken,
Answers to your questions are below:

*1. Using the force option*
A) During our testing we observed that in some instances the resource
deletion would fail, and that's why we added the force option.
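For reference, the kind of forced deletion being described is typically issued along these lines (a sketch, not the poster's exact commands; `res3` is the resource named later in the thread, and flags vary with the pcs version in use):

```shell
# Stop the resource first so the delete does not race a running instance
pcs resource disable res3

# --force removes the resource from the configuration even when the
# normal deletion path fails
pcs resource delete res3 --force
```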
On Tue, May 23, 2017 at 4:21 AM, Ken Gaillot wrote:
> [...]
On 05/16/2017 04:34 AM, Anu Pillai wrote:
> Hi,
>
> Please find attached debug logs for the stated problem as well as
> crm_mon command outputs.
> In this case we are trying to remove/delete res3 and system/node
> (0005B94238BC) from the cluster.
>
> *_Test reproduction steps_*
>
> Current [...]
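For context, removing a resource and then a node, as described above, usually looks roughly like this (a sketch only; `res3` and the node name are taken from the thread, and the exact commands depend on the cluster stack and tool versions):

```shell
# Remove the resource from the cluster configuration
pcs resource delete res3

# Remove the now-unused node from the corosync/pacemaker membership
pcs cluster node remove 0005B94238BC
```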
On 05/19/2017 04:14 AM, Anu Pillai wrote:
> Hi Ken,
>
> Did you get any chance to go through the logs?
Sorry, not yet.
> Do you need any more details ?
>
> Regards,
> Aswathi
On 05/15/2017 12:25 PM, Anu Pillai wrote:
> Hi Klaus,
>
> Please find attached cib.xml as well as corosync.conf.
Why wouldn't you keep placement-strategy at the default, to keep things
simple? You aren't using any load balancing anyway, as far as I
understood it.
Haven't used resource-stickiness=INF.
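For context, both knobs mentioned here are ordinary cluster-wide settings; setting them typically looks like this (the values shown are illustrative, and the exact pcs syntax differs between versions):

```shell
# Keep the default placement strategy (utilization attributes ignored)
pcs property set placement-strategy=default

# Pin resources to wherever they are currently running
pcs resource defaults resource-stickiness=INFINITY
```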
On 05/15/2017 09:36 AM, Anu Pillai wrote:
Hi,

We are running a pacemaker cluster to manage our resources. We have 6
systems running 5 resources, and one is acting as standby. We have a
restriction that only one resource can run on a node. But our
observation is that whenever we add or delete a resource from the
cluster, all the remaining [...]
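A one-resource-per-node restriction like this is often modeled with utilization attributes rather than constraints. A minimal sketch, assuming hypothetical names `node1` and `res1` (this is not the poster's actual cib.xml, and utilization is only honored when placement-strategy is not `default`):

```shell
# Each node offers one unit of capacity...
pcs node utilization node1 capacity=1

# ...and each resource consumes one unit, so no node hosts two resources
pcs resource utilization res1 capacity=1

# Utilization-based placement must be enabled explicitly
pcs property set placement-strategy=balanced
```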