NFS mounting ... This sounds the perfect candidate for autofs or systemd's
'.automount'. Have you thought about systemd automounting your NFS?
It will allow you to automatically mount on demand and umount based on
inactivity, to prevent stale NFS mounts on network issues.
If you still wish ...
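For illustration, a minimal unit pair might look like the following (server,
export, mount point and idle timeout are made up for the example; systemd
derives the unit file name from the mount path, so /mnt/data needs
mnt-data.mount and mnt-data.automount):

  # /etc/systemd/system/mnt-data.mount  (hypothetical mount)
  [Unit]
  Description=Example NFS mount

  [Mount]
  What=nfsserver:/export/data
  Where=/mnt/data
  Type=nfs
  Options=_netdev

  # /etc/systemd/system/mnt-data.automount
  [Unit]
  Description=Automount for /mnt/data

  [Automount]
  Where=/mnt/data
  TimeoutIdleSec=600

  [Install]
  WantedBy=multi-user.target

Enable the .automount unit rather than the .mount, so the filesystem is
mounted on first access and unmounted again after the idle timeout:

  systemctl enable --now mnt-data.automount
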
On 29.06.2020 20:20, Tony Stocker wrote:
>
>> The most interesting part seems to be the question how you define (and
>> detect) a failure that will cause a node switch.
>
> That is a VERY good question! How many mounts failed is the critical
> number when you have 130+? If a single one fails, ...
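One way to make that threshold explicit (purely a sketch; the list file and
the cut-off value are invented for the example) is a check that counts
missing mounts and only reports failure past a limit, which a monitor or
health agent could then act on:

  #!/bin/sh
  # Compare currently mounted NFS filesystems against an expected list
  # and fail only when more than MAX_MISSING are absent.
  EXPECTED_LIST=/etc/cluster/nfs-mounts.list   # one mount point per line (assumed path)
  MAX_MISSING=5                                # made-up threshold

  missing=0
  while read -r dir; do
      findmnt -n -t nfs,nfs4 "$dir" >/dev/null || missing=$((missing + 1))
  done < "$EXPECTED_LIST"

  echo "missing NFS mounts: $missing"
  [ "$missing" -le "$MAX_MISSING" ]

Whether one failed mount out of 130+ should already trigger a node switch is
a policy question; a threshold like this at least states the policy in one
place.
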
On Mon, Jun 29, 2020 at 11:08 AM Ulrich Windl wrote:
>
> You could construct a script that generates the commands needed, so it would
> be rather easy to handle.
True. The initial population wouldn't be that burdensome. I was
thinking of later when my coworkers have to add/remove mounts. I,
honestly ...
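A rough sketch of such a generator (the input format, resource names and
group name are assumptions, not anything agreed in this thread):

  #!/bin/sh
  # Read "server:/export /local/mountpoint" pairs, one per line,
  # and print one pcs command per NFS mount.
  while read -r src dir; do
      name="fs$(echo "$dir" | tr '/' '_')"
      echo "pcs resource create $name ocf:heartbeat:Filesystem" \
           "device=$src directory=$dir fstype=nfs" \
           "op monitor interval=30s --group nfs_mounts"
  done < nfs-mounts.txt

Re-running it after the list file changes would regenerate the commands,
which is what keeps later add/remove work by coworkers manageable.
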
Hello

We have a system which has become critical in nature and which
management wants made into a highly available pair of servers. We
are building on CentOS-8 and using Pacemaker to accomplish this.
Without going into too much detail as to why it's being done, and to
avoid any comments/suggestions ...
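For context, the basic two-node bootstrap with pcs on CentOS-8 is roughly the
following (node and cluster names are placeholders):

  # run on one node, after installing pacemaker, corosync and pcs
  pcs host auth node1 node2 -u hacluster
  pcs cluster setup mycluster node1 node2
  pcs cluster start --all
  pcs cluster enable --all
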
>>> Klaus Wenninger wrote on 29.06.2020 at 10:12 in message:
[...]
> My mailer was confused by all these combinations of
> "Antw: Re: Antw:" and didn't compose mails into a
> thread properly. Which is why I missed further
> discussion where it was definitely still about
> shared-storage and not w...
On Mon, 29 Jun 2020 09:27:00 +0100, Christine Caulfield wrote:
> Is anyone (else) using this?

I do: https://clusterlabs.github.io/PAF/

> We publish the libqb man pages to clusterlabs.github.io/libqb but I
> can't see any other clusterlabs projects using it (just by adding, eg,
> /pacemaker to the hostname).
Is anyone (else) using this?
We publish the libqb man pages to clusterlabs.github.io/libqb but I
can't see any other clusterlabs projects using it (just by adding, eg,
/pacemaker to the hostname).
With libqb 2.0.1 having actual man pages installed with it - which seems
far more useful to me - I ...
On 6/24/20 8:09 AM, Andrei Borzenkov wrote:
> Two-node is what I almost exclusively deal with. It works reasonably
> well in one location where failures to perform fencing are rare and can
> be mitigated by two different fencing methods. Usually SBD is reliable
> enough, as failure of shared storage ...
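For reference, a disk-based SBD setup is roughly the following (the device
path is a placeholder, and the cluster-side integration differs between
distributions, so treat this as a sketch):

  # initialize and inspect the SBD header on the shared disk
  sbd -d /dev/disk/by-id/scsi-SHARED_DISK create
  sbd -d /dev/disk/by-id/scsi-SHARED_DISK list

  # key settings in /etc/sysconfig/sbd (location varies by distro):
  #   SBD_DEVICE="/dev/disk/by-id/scsi-SHARED_DISK"
  #   SBD_WATCHDOG_DEV="/dev/watchdog"

  # the sbd daemon must be enabled so it starts with the cluster stack
  systemctl enable sbd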