Thanks, I'll give a newer version a shot.
--
Sam Gardner
Trustwave | SMART SECURITY ON DEMAND
On 3/25/16, 3:37 PM, "Lars Ellenberg" wrote:
>On Fri, Mar 25, 2016 at 04:08:48PM +, Sam Gardner wrote:
>> On 3/25/16, 10:26 AM, "Lars Ellenberg" wrote:
On Fri, Mar 25, 2016 at 04:08:48PM +, Sam Gardner wrote:
> On 3/25/16, 10:26 AM, "Lars Ellenberg" wrote:
>
>
> >On Thu, Mar 24, 2016 at 09:01:18PM +, Sam Gardner wrote:
> >> I'm having some trouble on a few of my clusters in which the DRBD Slave
> >> resource does not want to come up after a reboot until I manually run
> >> resource cleanup.
On 3/25/16, 10:26 AM, "Lars Ellenberg" wrote:
>On Thu, Mar 24, 2016 at 09:01:18PM +, Sam Gardner wrote:
>> I'm having some trouble on a few of my clusters in which the DRBD Slave
>> resource does not want to come up after a reboot until I manually run
>> resource cleanup.
On Thu, Mar 24, 2016 at 09:01:18PM +, Sam Gardner wrote:
> I'm having some trouble on a few of my clusters in which the DRBD Slave
> resource does not want to come up after a reboot until I manually run
> resource cleanup.
Logs?

I mean, to get a failure count, you have to have some
I am using failure-timeout on the DRBDSlave resource:
"Meta Attrs: failure-timeout=33s"
--
Sam Gardner
Trustwave | SMART SECURITY ON DEMAND
On 3/25/16, 10:00 AM, "emmanuel segura" wrote:
>If you don't want INFINITY after the node has been rebooted, you can
>use failure
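For reference, a failure-timeout like the "Meta Attrs: failure-timeout=33s" quoted above is normally attached as a meta attribute of the master/slave resource. A minimal crm shell sketch; the resource names (drbd_r0, ms_drbd) and the DRBD resource name r0 are assumptions, not taken from the poster's actual configuration:

```shell
# Sketch only -- resource names are placeholders, not from this thread.
crm configure primitive drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op monitor interval=29s role=Master \
    op monitor interval=31s role=Slave

# failure-timeout lives in the meta attributes of the ms resource,
# matching the "Meta Attrs: failure-timeout=33s" shown above.
crm configure ms ms_drbd drbd_r0 \
    meta master-max=1 clone-max=2 notify=true \
    failure-timeout=33s
```

Note that failure-timeout only expires fail counts; it does not prevent them from being set in the first place.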
on-fail=restart doesn't appear to do anything - the DRBDSlave resource
failcount is still at INFINITY after the secondary node is rebooted:
Is there anything else that I've screwed up in the config somehow?
Migration threshold doesn't seem to have a ton of meaning in the sense of
a Slave
try to use on-fail for a single resource.
2016-03-25 0:22 GMT+01:00 Adam Spiers:
> Sam Gardner wrote:
>> I'm having some trouble on a few of my clusters in which the DRBD Slave
>> resource does not want to come up after a reboot until I manually run
>> resource cleanup.
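On the on-fail suggestion above: on-fail is a per-operation attribute in Pacemaker, not a resource-wide meta attribute, so it has to be set on each op definition. A sketch in crm shell, with resource names and timeout values assumed for illustration:

```shell
# Sketch: on-fail is configured per operation (names/timeouts assumed).
crm configure primitive drbd_r0 ocf:linbit:drbd \
    params drbd_resource=r0 \
    op start interval=0 timeout=240s on-fail=restart \
    op monitor interval=31s role=Slave on-fail=restart
```

For a failed start specifically, on-fail interacts with start-failure-is-fatal: with the default start-failure-is-fatal=true, a start failure still sets the fail count to INFINITY regardless of on-fail.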
Sam Gardner wrote:
> I'm having some trouble on a few of my clusters in which the DRBD Slave
> resource does not want to come up after a reboot until I manually run
> resource cleanup.
>
> Setting 'start-failure-is-fatal=false' as a global cluster property and a
>
I'm having some trouble on a few of my clusters in which the DRBD Slave
resource does not want to come up after a reboot until I manually run resource
cleanup.
Setting 'start-failure-is-fatal=false' as a global cluster property and a
failure-timeout works to resolve the issue, but I don't
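The workaround described above, plus the manual cleanup mentioned at the start of the thread, could be sketched with standard Pacemaker tools like this (resource names are placeholders; crm_mon, crm_failcount, and crm resource cleanup are the stock commands):

```shell
# Make a failed start count against the resource instead of banning it outright.
crm configure property start-failure-is-fatal=false

# Inspect fail counts after the reboot (what Lars asked about above).
crm_mon -1 --failcounts

# Query the fail count for one resource (name is a placeholder).
crm_failcount --query --resource drbd_r0

# The manual step the original post says is currently required.
crm resource cleanup ms_drbd
```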