On Fri, 2021-11-12 at 09:54 +0100, Sandro Bonazzola wrote:
As I mentioned in Slack, the safest approach is to:
1. Reduce the volume to replica 1 (there is no need to keep the arbiter until resynchronization):
gluster volume remove-brick VOLUME replica 1
beclovkvma02.bec.net:/data/brick2/brick2
beclovkvma03.bec.net:/data/brick1/brick2
beclovkvma02.
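For reference, the general shape of that remove-brick step looks roughly like the sketch below. The host and brick paths here are placeholders, not the real cluster paths quoted above, and the add-brick step is only the usual way replicas are restored afterwards, not something stated in this thread:

```shell
# Step 1 sketch: drop the extra replica/arbiter bricks so the volume
# becomes a plain replica-1 volume (placeholder hosts and paths).
gluster volume remove-brick VOLUME replica 1 \
    host2.example.net:/data/brick2/brick2 \
    host3.example.net:/data/brick1/brick2 \
    force

# Later, once the surviving brick is healthy, replicas are typically
# re-added (bricks should be empty before re-adding):
gluster volume add-brick VOLUME replica 3 \
    host2.example.net:/data/brick2/brick2 \
    host3.example.net:/data/brick1/brick2
```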
On Wed, 10 Nov 2021 at 15:45 Chris Adams
wrote:
> I have seen vdsmd leak memory for years (I've been running oVirt since
> version 3.5), but never been able to nail it down. I've upgraded a
> cluster to oVirt 4.4.9 (reloading the hosts with CentOS 8-stream), and I
> still see it
On Thu, 11 Nov 2021 at 23:19 David White <
dmwhite...@protonmail.com> wrote:
> Hi team,
> I saw that RHEL 8.5 was released yesterday, so I just put one of my hosts
> that doesn't have local gluster storage into maintenance mode and again
> attempted an update.
>
> The update again