I always used this one for triggering kdump when using sbd: https://www.suse.com/support/kb/doc/?id=19873
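The short version of that TID, as a minimal sketch (the device path is a
placeholder, kdump itself must already be configured, and SBD_TIMEOUT_ACTION
is described in sbd(8) and the sysconfig template):

    # /etc/sysconfig/sbd (sketch, not verbatim from the TID)
    SBD_DEVICE="/dev/disk/by-id/<your-sbd-device>"   # placeholder
    SBD_TIMEOUT_ACTION="flush,crashdump"             # panic (and thus kdump) instead of reboot

    # kdump prerequisites, e.g. on SLES:
    #   crashkernel=<size> on the kernel command line, then:
    #   systemctl enable --now kdump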
man votequorum
auto_tie_breaker: 1 allows you to have quorum with 50%, yet if, for example,
the A side (the one with the lowest node id) dies, the B side still has 50%
but won't be able to bring back the resources, as the node with the lowest id
is on the A side. If you want to avoid that, you can bring up a qdevice on a
VM in a third
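For reference, a minimal corosync.conf sketch of both options (the qnetd host
is a placeholder; see votequorum(5) and corosync-qdevice(8) for details):

    # Option A: auto_tie_breaker -- in a 50/50 split, the partition holding
    # the tie-breaker node (by default the one with the lowest node id)
    # keeps quorum.
    quorum {
        provider: corosync_votequorum
        auto_tie_breaker: 1
        auto_tie_breaker_node: lowest   # the default
    }

    # Option B: qdevice on a third site -- an external arbitrator breaks
    # the tie, so either half can survive on its own.
    quorum {
        provider: corosync_votequorum
        device {
            model: net
            net {
                host: qnetd.example.com   # placeholder for your qnetd VM
                algorithm: ffsplit
            }
        }
    }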
On Fri, Feb 25, 2022 at 3:47 AM Andrei Borzenkov wrote:
>
> On Fri, Feb 25, 2022 at 2:23 PM Reid Wahl wrote:
> >
> > On Fri, Feb 25, 2022 at 3:22 AM Reid Wahl wrote:
> > >
> ...
> > > >
> > > > So what happens most likely is that the watchdog terminates the kdump.
> > > > In that case all the
On Fri, Feb 25, 2022 at 4:31 AM Ulrich Windl
wrote:
>
> >>> Reid Wahl wrote on 25.02.2022 at 12:31 in message
> :
> > On Thu, Feb 24, 2022 at 2:28 AM Ulrich Windl
> > wrote:
> >>
> >> Hi!
> >>
> >> I just discovered this oddity for a SLES15 SP3 cluster:
> >> Feb 24 11:16:17 h16
Hi,
Thank you so much for the answer. It seems to me that the one option I
have is one big cluster with 4 nodes.
However, I still cannot understand how I could solve the issue when one
site with 2 nodes is down: the other site alone then does not have quorum,
so it does not work...
Can you
>>> Reid Wahl wrote on 25.02.2022 at 12:31 in message
:
> On Thu, Feb 24, 2022 at 2:28 AM Ulrich Windl
> wrote:
>>
>> Hi!
>>
>> I just discovered this oddity for a SLES15 SP3 cluster:
>> Feb 24 11:16:17 h16 pacemaker-attrd[7274]: notice: Setting val_net_gw1[h18]: 1000 -> 139000
>>
>>
On Fri, Feb 25, 2022 at 2:23 PM Reid Wahl wrote:
>
> On Fri, Feb 25, 2022 at 3:22 AM Reid Wahl wrote:
> >
...
> > >
> > > So what happens most likely is that the watchdog terminates the kdump.
> > > In that case all the mess with fence_kdump won't help, right?
> >
> > You can configure
On Fri, Feb 25, 2022 at 3:31 AM Reid Wahl wrote:
>
> On Thu, Feb 24, 2022 at 2:28 AM Ulrich Windl
> wrote:
> >
> > Hi!
> >
> > I just discovered this oddity for a SLES15 SP3 cluster:
> > Feb 24 11:16:17 h16 pacemaker-attrd[7274]: notice: Setting val_net_gw1[h18]: 1000 -> 139000
> >
> >
On Thu, Feb 24, 2022 at 2:28 AM Ulrich Windl
wrote:
>
> Hi!
>
> I just discovered this oddity for a SLES15 SP3 cluster:
> Feb 24 11:16:17 h16 pacemaker-attrd[7274]: notice: Setting val_net_gw1[h18]: 1000 -> 139000
>
> That surprised me, because usually the value is 1000 or 0.
>
> Digging a
On Fri, Feb 25, 2022 at 3:22 AM Reid Wahl wrote:
>
> On Thu, Feb 24, 2022 at 4:22 AM Ulrich Windl
> wrote:
> >
> > Hi!
> >
> > After reading about fence_kdump and fence_kdump_send I wonder:
> > Does anybody use that in production?
>
> Quite a lot of people, in fact.
>
> > Having the networking
On Thu, Feb 24, 2022 at 4:22 AM Ulrich Windl
wrote:
>
> Hi!
>
> After reading about fence_kdump and fence_kdump_send I wonder:
> Does anybody use that in production?
Quite a lot of people, in fact.
> Having the networking and bonding in initrd does not sound like a good idea
> to me.
>
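For the curious, a rough sketch of the usual arrangement (pcs syntax,
RHEL-style paths; node and device names are illustrative only):

    # /etc/kdump.conf on each node: have the kdump initrd notify the
    # surviving peers via fence_kdump_send (these are the stock defaults):
    #   fence_kdump_nodes <the other cluster node(s)>
    #   fence_kdump_args -p 7410 -f auto -c 0 -i 10

    # Pacemaker side: fence_kdump as fencing level 1, real power fencing
    # as level 2, so a node that never starts dumping still gets fenced.
    pcs stonith create kdump-fence fence_kdump \
        pcmk_host_check=static-list pcmk_host_list="node1 node2"
    pcs stonith level add 1 node1 kdump-fence
    pcs stonith level add 2 node1 power-fence-node1   # existing power fence device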
On 2/24/22 20:21, Ulrich Windl wrote:
Hi!
After reading about fence_kdump and fence_kdump_send I wonder:
Does anybody use that in production?
Having the networking and bonding in initrd does not sound like a good idea to
me.
I assume one of the motivations for fence_kdump is to reduce the
>>> Viet Nguyen wrote on 24.02.2022 at 10:28 in
message
:
> Hi,
>
> Thank you so much for your help. May I ask a follow-up question:
>
> For the option of having one big cluster with 4 nodes without booth: if
> one site (having 2 nodes) is down, then the other site does not work
>>> "Walker, Chris" schrieb am 24.02.2022 um 17:26
in
Nachricht
> We use the fence_kdump* code extensively in production and have never had any
> problems with it (other than the normal initial configuration challenges).
> Kernel panic + kdump is our most common failure mode, so we exercise