Sigh... never mind. node2 was in standby (not sure how that happened). "pcs
node unstandby node2" and now it's working.
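For anyone hitting the same symptom, a quick way to spot and clear a standby node (a sketch; the node name is the one from this thread, and this assumes a running cluster):

```shell
# Sketch: standby nodes are listed in "pcs status nodes" output;
# unstandby clears the flag so resources can run there again.
pcs status nodes          # look for node2 in the Standby list
pcs node unstandby node2  # allow resources back onto node2
```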
---
Regards,
Kevin Martin
On Tue, Oct 12, 2021 at 3:43 PM kevin martin wrote:
> Ok, so I'm doing more wrong than I thought. I did a "pcs cluster stop
> node1" on the
Ok, so I'm doing more wrong than I thought. I did a "pcs cluster stop
node1" on the main node expecting it would roll over the virtual ip to
node2, no joy. So "graceful" failover doesn't work either. The actual
message is: (pcmk__native_allocate) info: Resource virtual_ip cannot run
anywhere
I'm trying to replace a 2-node cluster running on rhel6 with a 2-node
cluster on el8 using the versions of pacemaker/corosync/pcsd that are in
the repos (pacemaker 1.1.20, pcs 0.9, corosync 2.4.3 on el6; pacemaker
2.0.5, pcs 0.10, corosync 3.1 on el8), and I must be doing something
wrong. When I shutdown the main
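A gentler way to test failover than stopping the whole cluster stack on a node is standby mode (a sketch; node and resource names are the ones used in this thread, and a running cluster is assumed):

```shell
# Sketch: standby migrates resources off a node while corosync/pacemaker
# keep running, so quorum and membership are unaffected.
pcs node standby node1    # virtual_ip should move to node2
pcs status resources      # confirm virtual_ip is Started on node2
pcs node unstandby node1  # restore node1 as a candidate host
```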
On Tue, 2021-10-12 at 20:48 +0300, Andrei Borzenkov wrote:
> On 12.10.2021 09:27, Ulrich Windl wrote:
> > >>> Andrei Borzenkov wrote on 11.10.2021 at 11:43 in message:
> > > On Mon, Oct 11, 2021 at 9:29 AM Ulrich Windl
> > > wrote:
> > >
> > > > > > Also how
On 12.10.2021 09:27, Ulrich Windl wrote:
Andrei Borzenkov wrote on 11.10.2021 at 11:43 in message:
>> On Mon, Oct 11, 2021 at 9:29 AM Ulrich Windl
>> wrote:
>>
> Also how long would such a delay be: Long enough until the other node is
> fenced, or long enough
>>> Roger Zhou via Users wrote on 12.10.2021 at 09:55 in message:
...
>> # Time syncs can make the clock jump backward, which messes with logging
>> # and failure timestamps, so wait until it's done.
>> After=time-sync.target
>> ...
>>
>> Oct 05 14:58:10 h16 pacemakerd[6974]: notice:
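If you want the same time-sync ordering without editing the vendor unit, a systemd drop-in works too. This is a sketch: the directory below is a temporary stand-in for the real /etc/systemd/system/corosync.service.d, and applying it for real needs root plus a daemon-reload.

```shell
# Sketch: add After=time-sync.target via a drop-in instead of patching
# /usr/lib/systemd/system/corosync.service directly.
dropin_dir=$(mktemp -d)   # stand-in for /etc/systemd/system/corosync.service.d
cat > "$dropin_dir/10-time-sync.conf" <<'EOF'
[Unit]
# Time syncs can make the clock jump backward, which messes with logging
# and failure timestamps, so wait until it's done.
After=time-sync.target
Wants=time-sync.target
EOF
cat "$dropin_dir/10-time-sync.conf"
# systemctl daemon-reload   # required after installing a real drop-in
```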
On Tue, 12 Oct 2021 09:46:04 +0200
"Ulrich Windl" wrote:
> >>> Jehan-Guillaume de Rorthais wrote on 12.10.2021 at 09:35 in message
> >>> <20211012093554.4bb761a2@firost>:
> > On Tue, 12 Oct 2021 08:42:49 +0200
> > "Ulrich Windl" wrote:
> >
> ...
> >> "watch cat /proc/meminfo" could
On 10/12/21 3:32 PM, Ulrich Windl wrote:
Hi!
I just examined the corosync.service unit in SLES15. It contains:
# /usr/lib/systemd/system/corosync.service
[Unit]
Description=Corosync Cluster Engine
Documentation=man:corosync man:corosync.conf man:corosync_overview
On Tue, 12 Oct 2021 08:42:49 +0200
"Ulrich Windl" wrote:
> ...
> >> sysctl -a | grep dirty
> >> vm.dirty_background_bytes = 0
> >> vm.dirty_background_ratio = 10
> >
> > Considering your 256GB of physical memory, this means you can dirty up to
> > 25GB
> > pages in cache before the kernel
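The arithmetic behind that ~25GB figure can be checked directly (a sketch; the 256GB RAM size is the one quoted in the thread):

```shell
# With vm.dirty_background_bytes=0, vm.dirty_background_ratio applies:
# roughly 10% of RAM may be dirtied before background writeback starts.
mem_bytes=$((256 * 1024 * 1024 * 1024))
ratio=10
threshold=$((mem_bytes * ratio / 100))
echo "background writeback starts around $((threshold / 1024 / 1024 / 1024)) GiB of dirty pages"
```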
Hi!
I just examined the corosync.service unit in SLES15. It contains:
# /usr/lib/systemd/system/corosync.service
[Unit]
Description=Corosync Cluster Engine
Documentation=man:corosync man:corosync.conf man:corosync_overview
ConditionKernelCommandLine=!nocluster
Requires=network-online.target
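That ConditionKernelCommandLine=!nocluster line means systemd skips starting corosync when "nocluster" appears on the kernel command line. A small sketch of the check (the command-line string below is a stand-in for the contents of /proc/cmdline):

```shell
# Sketch of ConditionKernelCommandLine=!nocluster: the unit is skipped
# when the "nocluster" word is present on the kernel command line.
cmdline="BOOT_IMAGE=/vmlinuz root=/dev/sda1 quiet nocluster"
case " $cmdline " in
  *" nocluster "*) verdict="skipped" ;;
  *)               verdict="started" ;;
esac
echo "corosync would be $verdict"
```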
>>> Jehan-Guillaume de Rorthais wrote on 11.10.2021 at 11:57 in message
<2021105737.7cc99e69@firost>:
> Hi,
>
> I kept the full answer in history to keep the list informed of your full
> answer.
>
> My answer down below.
>
> On Mon, 11 Oct 2021 11:33:12 +0200
> damiano giuliani wrote:
>>> Andrei Borzenkov wrote on 11.10.2021 at 11:43 in message:
> On Mon, Oct 11, 2021 at 9:29 AM Ulrich Windl
> wrote:
>
>> >> Also how long would such a delay be: Long enough until the other node is
>> >> fenced, or long enough until the other node was fenced, booted
>> >>
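One knob that bears on the delay question is a random fencing delay on the fence device, so that in a two-node split both nodes don't shoot each other simultaneously (a sketch; the stonith device name is hypothetical):

```shell
# Sketch: pcmk_delay_max adds a random delay (0 to the given maximum)
# before this device fences; "fence-node2" is a hypothetical device name.
pcs stonith update fence-node2 pcmk_delay_max=15s
```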