Hi Bernd,
As SLES 12 is in such a late support phase, I guess SUSE will provide fixes only
for SLES 15.
It would be best to open a case with them and ask about that.
Best Regards,
Strahil Nikolov
On 19 August 2020 at 17:29:32 GMT+03:00, "Lentes, Bernd"
wrote:
>
>- On Aug 19, 2020, at
On Wed, 2020-08-19 at 16:29 +0200, Lentes, Bernd wrote:
> - On Aug 19, 2020, at 4:04 PM, kgaillot kgail...@redhat.com
> wrote:
> > > This appears to be a scheduler bug.
> >
> > Fix is in master branch and will land in 2.0.5 expected at end of
> > the
> > year
> >
> >
- On Aug 19, 2020, at 4:04 PM, kgaillot kgail...@redhat.com wrote:
>> This appears to be a scheduler bug.
>
> Fix is in master branch and will land in 2.0.5 expected at end of the
> year
>
> https://github.com/ClusterLabs/pacemaker/pull/2146
A principal question:
I have SLES 12 and i'm
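As a side note, one way to check which Pacemaker build is installed (and so whether a given fix could already be included) is to query the package and the daemon itself; a generic sketch, not SLES-specific advice:

  rpm -q pacemaker        # exact package/build version shipped by the distribution
  pacemakerd --features   # upstream version plus compiled-in features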
- On Aug 18, 2020, at 7:30 PM, kgaillot kgail...@redhat.com wrote:
>> > I'm not sure, I'd have to see the pe input.
>>
>> You find it here:
>> https://hmgubox2.helmholtz-muenchen.de/index.php/s/WJGtodMZ9k7rN29
>
> This appears to be a scheduler bug.
>
> The scheduler considers a
On Tue, 2020-08-18 at 12:30 -0500, Ken Gaillot wrote:
> On Tue, 2020-08-18 at 16:47 +0200, Lentes, Bernd wrote:
> >
> > - On Aug 17, 2020, at 5:09 PM, kgaillot kgail...@redhat.com
> > wrote:
> >
> >
> > > > I checked all relevant pe-files in this time period.
> > > > This is what i found
On Tue, 2020-08-18 at 16:47 +0200, Lentes, Bernd wrote:
>
> - On Aug 17, 2020, at 5:09 PM, kgaillot kgail...@redhat.com
> wrote:
>
>
> > > I checked all relevant pe-files in this time period.
> > > This is what i found out (i just write the important entries):
>
>
> > > Executing cluster
- On Aug 17, 2020, at 5:09 PM, kgaillot kgail...@redhat.com wrote:
>> I checked all relevant pe-files in this time period.
>> This is what i found out (i just write the important entries):
>> Executing cluster transition:
>> * Resource action: vm_nextcloud stop on ha-idg-2
>> Revised
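For reference, a saved pe input like the ones mentioned above can be replayed offline with crm_simulate to see the transition the scheduler computed; the file name below is only an example (SLES keeps these under /var/lib/pacemaker/pengine/):

  # replay a saved scheduler input and print the resulting transition
  crm_simulate --simulate --xml-file /var/lib/pacemaker/pengine/pe-input-123.bz2
  # add --show-scores to also print the placement scores behind each action
  crm_simulate --simulate --show-scores --xml-file /var/lib/pacemaker/pengine/pe-input-123.bz2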
On Fri, 2020-08-14 at 20:37 +0200, Lentes, Bernd wrote:
> - On Aug 9, 2020, at 10:17 PM, Bernd Lentes
> bernd.len...@helmholtz-muenchen.de wrote:
>
>
> > > So this appears to be the problem. From these logs I would guess
> > > the
> > > successful stop on ha-idg-1 did not get written to the
On Fri, 2020-08-14 at 12:17 +0200, Lentes, Bernd wrote:
>
> - On Aug 10, 2020, at 11:59 PM, kgaillot kgail...@redhat.com
> wrote:
> > The most recent transition is aborted, but since all its actions
> > are
> > complete, the only effect is to trigger a new transition.
> >
> > We should
- On Aug 9, 2020, at 10:17 PM, Bernd Lentes
bernd.len...@helmholtz-muenchen.de wrote:
>> So this appears to be the problem. From these logs I would guess the
>> successful stop on ha-idg-1 did not get written to the CIB for some
>> reason. I'd look at the pe input from this transition on
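One way to see what the CIB actually recorded for the resource is to dump its status section directly; the resource name is the one from this thread, and the grep is only illustrative:

  # dump the CIB status section and look at the recorded operation history
  cibadmin --query --scope status | grep -A 3 'lrm_resource id="vm_nextcloud"'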
- On Aug 10, 2020, at 11:59 PM, kgaillot kgail...@redhat.com wrote:
> The most recent transition is aborted, but since all its actions are
> complete, the only effect is to trigger a new transition.
>
> We should probably rephrase the log message. In fact, the whole
> "transition"
On Sun, 2020-08-09 at 22:17 +0200, Lentes, Bernd wrote:
>
> - On Jul 29, 2020, at 6:53 PM, kgaillot kgail...@redhat.com wrote:
>
> > On Wed, 2020-07-29 at 17:26 +0200, Lentes, Bernd wrote:
> > > Hi,
> > >
> > > a few days ago one of my nodes was fenced and i don't know why,
> > > which
> > >
- On Jul 29, 2020, at 6:53 PM, kgaillot kgail...@redhat.com wrote:
> On Wed, 2020-07-29 at 17:26 +0200, Lentes, Bernd wrote:
>> Hi,
>>
>> a few days ago one of my nodes was fenced and i don't know why, which
>> is something i really don't like.
>> What i did:
>> I put one node (ha-idg-1) in
- On Jul 29, 2020, at 6:53 PM, kgaillot kgail...@redhat.com wrote:
> Since the ha-idg-2 is now shutting down, ha-idg-1 becomes DC.
The other way round.
>> Jul 20 17:05:33 [10690] ha-idg-2 pengine: warning:
>> unpack_rsc_op_failure: Processing failed migrate_to of vm_nextcloud
>> on
On Wed, 2020-07-29 at 17:26 +0200, Lentes, Bernd wrote:
> Hi,
>
> a few days ago one of my nodes was fenced and i don't know why, which
> is something i really don't like.
> What i did:
> I put one node (ha-idg-1) in standby. The resources on it (mostly
> virtual domains) were migrated to
- On Jul 29, 2020, at 5:26 PM, Bernd Lentes
bernd.len...@helmholtz-muenchen.de wrote:
Hi,
sorry, i missed:
OS: SLES 12 SP4
kernel: 4.12.14-95.32
pacemaker: pacemaker-1.1.19+20181105.ccd6b5b10-3.13.1.x86_64
Bernd
Hi,
a few days ago one of my nodes was fenced and i don't know why, which is
something i really don't like.
What i did:
I put one node (ha-idg-1) in standby. The resources on it (mostly virtual
domains) were migrated to ha-idg-2,
except one domain (vm_nextcloud). On ha-idg-2 a mountpoint
On Thu, 2019-10-10 at 17:22 +0200, Lentes, Bernd wrote:
> Hi,
>
> i have a two node cluster running on SLES 12 SP4.
> I did some testing on it.
> I put one into standby (ha-idg-2), the other (ha-idg-1) got fenced a
> few minutes later because i made a mistake.
> ha-idg-2 was DC. ha-idg-1 made a
On 10.10.2019 at 18:22, Lentes, Bernd wrote:
> Hi,
>
> i have a two node cluster running on SLES 12 SP4.
> I did some testing on it.
> I put one into standby (ha-idg-2), the other (ha-idg-1) got fenced a few
> minutes later because i made a mistake.
> ha-idg-2 was DC. ha-idg-1 made a fresh boot and i
Hi,
i have a two node cluster running on SLES 12 SP4.
I did some testing on it.
I put one into standby (ha-idg-2), the other (ha-idg-1) got fenced a few
minutes later because i made a mistake.
ha-idg-2 was DC. ha-idg-1 made a fresh boot and i started corosync/pacemaker on
it.
It seems ha-idg-1
- On Aug 14, 2019, at 7:07 PM, kgaillot kgail...@redhat.com wrote:
>> That's my setting:
>>
>> expected_votes: 2
>> two_node: 1
>> wait_for_all: 0
>>
>> no-quorum-policy=ignore
>>
>> I did that because i want to be able to start the cluster although one
>> node has e.g. a hardware
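For context, this is roughly how those votequorum settings would appear in /etc/corosync/corosync.conf, together with the matching Pacemaker property; a sketch assuming the standard votequorum provider:

  quorum {
      provider: corosync_votequorum
      expected_votes: 2
      two_node: 1
      wait_for_all: 0
  }

  # Pacemaker side (crmsh):
  crm configure property no-quorum-policy=ignore

Note that two_node: 1 normally turns wait_for_all on by default, so disabling it explicitly as above is what allows a single node to bring the cluster up on its own.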
On Wed, 2019-08-14 at 11:57 +0200, Lentes, Bernd wrote:
>
> - On Aug 13, 2019, at 1:19 AM, kgaillot kgail...@redhat.com
> wrote:
>
>
> >
> > The key messages are:
> >
> > Aug 09 17:43:27 [6326] ha-idg-1 crmd: info:
> > crm_timer_popped: Election
> > Trigger (I_DC_TIMEOUT) just
- On Aug 13, 2019, at 1:19 AM, kgaillot kgail...@redhat.com wrote:
>
> The key messages are:
>
> Aug 09 17:43:27 [6326] ha-idg-1 crmd: info: crm_timer_popped:
> Election
> Trigger (I_DC_TIMEOUT) just popped (2ms)
> Aug 09 17:43:27 [6326] ha-idg-1 crmd: warning:
- On Aug 13, 2019, at 3:34 PM, Matthias Ferdinand m...@14v.de wrote:
>> 17:26:35 crm node standby ha-idg1-
>
> if that is not a copy error (ha-idg1- vs. ha-idg-1), then ha-idg-1
> was not set to standby, and installing updates may have done some
> meddling with corosync/pacemaker (like
On Mon, Aug 12, 2019 at 04:09:48PM -0400, users-requ...@clusterlabs.org wrote:
> Date: Mon, 12 Aug 2019 18:09:24 +0200 (CEST)
> From: "Lentes, Bernd"
> To: Pacemaker ML
> Subject: [ClusterLabs] why is node fenced ?
> Message-ID:
> <546330844.1686419.15656261
- On Aug 12, 2019, at 7:47 PM, Chris Walker cwal...@cray.com wrote:
> When ha-idg-1 started Pacemaker around 17:43, it did not see ha-idg-2, for
> example,
>
> Aug 09 17:43:05 [6318] ha-idg-1 pacemakerd: info:
> pcmk_quorum_notification:
> Quorum retained | membership=1320 members=1
>
On Mon, 2019-08-12 at 18:09 +0200, Lentes, Bernd wrote:
> Hi,
>
> last Friday (9th of August) i had to install patches on my two-node
> cluster.
> I put one of the nodes (ha-idg-2) into standby (crm node standby ha-
> idg-2), patched it, rebooted,
> started the cluster (systemctl start
When ha-idg-1 started Pacemaker around 17:43, it did not see ha-idg-2, for
example,
Aug 09 17:43:05 [6318] ha-idg-1 pacemakerd: info: pcmk_quorum_notification:
Quorum retained | membership=1320 members=1
after ~20s (dc-deadtime parameter), ha-idg-2 is marked 'unclean' and STONITHed
as
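dc-deadtime is a regular cluster property, so it can be checked and, if a rebooted node needs longer than the 20s default to reappear, raised; the 2min value below is only an example:

  # query the current value (unset means the 20s default)
  crm_attribute --type crm_config --name dc-deadtime --query
  # raise it via crmsh
  crm configure property dc-deadtime=2min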
Hi,
last Friday (9th of August) i had to install patches on my two-node cluster.
I put one of the nodes (ha-idg-2) into standby (crm node standby ha-idg-2),
patched it, rebooted,
started the cluster (systemctl start pacemaker) again, put the node again
online, everything fine.
Then i wanted
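Written out as commands, the procedure described above is essentially (node name as in this thread):

  crm node standby ha-idg-2     # move all resources off the node
  # ... install the patches and reboot ...
  systemctl start pacemaker     # start the cluster stack again
  crm node online ha-idg-2      # take the node out of standby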
On 16/05/19 17:10 +0200, Lentes, Bernd wrote:
> my HA cluster with two nodes fenced one of them on the 14th of May.
> ha-idg-1 has been the DC, ha-idg-2 was fenced.
> It happened around 11:30 am.
> The log from the fenced one isn't really informative:
>
> [...]
>
> Node restarts at 11:44 am.
> The DC is
Hi,
my HA cluster with two nodes fenced one of them on the 14th of May.
ha-idg-1 has been the DC, ha-idg-2 was fenced.
It happened around 11:30 am.
The log from the fenced one isn't really informative:
==
2019-05-14T11:22:09.948980+02:00 ha-idg-2 liblogging-stdlog: -- MARK --