Dear Sandro, Nir,
usually I avoid the test repos for two reasons:
1. I had a bad experience moving from the RHEL 7.5 Beta repos to the 7.5
standard repos, so now I prefer to update only when patches are in the
standard repo.
2. My lab is a kind of test environment, but I prefer to be able to spin up a VM
On Fri, Mar 15, 2019, 15:16 Sandro Bonazzola
>
> On Fri, Mar 15, 2019 at 2:00 PM Simon Coter <simon.co...@oracle.com>
> wrote:
>
>> Hi,
>>
>> something that I'm seeing in the vdsm.log that I think is gluster
>> related is the following message:
>>
>> 2019-03-15 05:58:28,980-0700 INFO (jsonrpc/6) [root] managedvolume not
>> supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot
>> import os_brick',) (caps:148)
Interesting, this is the first time I've seen this bug posted. I'm still
having problems with bricks going down in my HCI setup. It had been 1-2
bricks dropping 2-4 times per day on different volumes. However, when all
bricks are up everything is working OK and all bricks are healed and
seemingly in
Upgrading gluster from version 3.12 or 4.1 (included in oVirt 4.2.x) to 5.3 (in
oVirt 4.3) seems to cause this due to a bug in the gluster upgrade process.
It's an unfortunate side effect for those of us upgrading oVirt hyper-converged
systems. Installing new should be fine, but I'd wait for gluster to get
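If you want to keep an eye on it in the meantime, here is a rough sketch of the
kind of check one could cron to spot dropped bricks (illustrative only: it
shells out to "gluster volume status all --xml", and the element names it looks
for are an assumption about the XML layout, which can shift between gluster
releases):

#!/usr/bin/env python
# Rough sketch: list bricks that "gluster volume status" reports as offline.
# The --xml element names below are an assumption, not a stable interface.
import subprocess
import xml.etree.ElementTree as ET

def offline_bricks():
    out = subprocess.check_output(
        ["gluster", "volume", "status", "all", "--xml"])
    root = ET.fromstring(out)
    down = []
    for vol in root.iter("volume"):
        volname = vol.findtext("volName", "unknown")
        for node in vol.iter("node"):
            path = node.findtext("path", "")
            if not path.startswith("/"):
                continue  # skip self-heal/quota daemon entries, keep bricks
            # status "1" means online; anything else counts as down
            if node.findtext("status") != "1":
                down.append((volname, node.findtext("hostname", "?"), path))
    return down

if __name__ == "__main__":
    for volname, host, path in offline_bricks():
        print("{0}: {1}:{2} is down".format(volname, host, path))

Run that every few minutes and you at least get a timeline of when bricks drop,
which makes it easier to correlate with the glusterd logs.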
Hi,
something that I'm seeing in the vdsm.log that I think is gluster related is
the following message:
2019-03-15 05:58:28,980-0700 INFO (jsonrpc/6) [root] managedvolume not
supported: Managed Volume Not Supported. Missing package os-brick.: ('Cannot
import os_brick',) (caps:148)
os_brick
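From the look of it, that is just vdsm probing for an optional dependency at
start-up: if the os_brick Python module cannot be imported, the Managed Volume
capability is reported as unsupported and the host carries on. A minimal sketch
of that kind of guarded probe (my illustration, not vdsm's actual code):

# Illustrative sketch of an optional-dependency probe: if the module is
# missing, log an INFO message and report the capability as unsupported.
import logging

log = logging.getLogger("root")

def managed_volume_supported():
    try:
        import os_brick  # noqa: F401 - provided by the os-brick package
    except ImportError as e:
        log.info("managedvolume not supported: missing os-brick: %r", e.args)
        return False
    return True

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    print("Managed Volume support: %s" % managed_volume_supported())

So the INFO line itself looks harmless unless you actually need the managed
volume / os-brick integration on that host.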
That is essentially the behaviour that I've seen. I wonder if perhaps it
could be related to the increased heal activity that occurs on the volumes
during reboots of nodes after updating.
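If it is heal related, watching the pending heal counts while a rebooted node
comes back should show it. A rough sketch of what I mean (it just wraps
"gluster volume heal VOLNAME info" and assumes the usual "Brick ..." /
"Number of entries: N" text layout, which may vary between gluster versions):

#!/usr/bin/env python
# Rough sketch: report pending self-heal entries per brick for one volume.
# Parses plain-text "gluster volume heal <vol> info" output, so the expected
# line prefixes are an assumption about the output format.
import subprocess
import sys

def pending_heals(volume):
    out = subprocess.check_output(
        ["gluster", "volume", "heal", volume, "info"]).decode()
    counts = {}
    brick = None
    for line in out.splitlines():
        if line.startswith("Brick "):
            brick = line[len("Brick "):].strip()
        elif line.startswith("Number of entries:") and brick:
            counts[brick] = line.split(":", 1)[1].strip()
    return counts

if __name__ == "__main__":
    for brick, count in pending_heals(sys.argv[1]).items():
        print("{0}: {1} entries pending heal".format(brick, count))

If the counts stay high (or keep climbing) around the time bricks drop, that
would point at the heal load rather than the upgrade itself.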
On Fri, Mar 15, 2019 at 12:43 PM Ron Jerome wrote:
> Just FYI, I have observed similar issues where a volum
Just FYI, I have observed similar issues where a volume becomes unstable
for a period of time after the upgrade, but then seems to settle down after
a while. I've only witnessed this in the 4.3.x versions. I suspect it's
more of a Gluster issue than oVirt, but troubling none the less.
On Fri, 15
Yes, that is correct. I don't know if the upgrade to 4.3.1 itself caused
issues, or if rebooting all hosts again to apply node updates is what started
causing brick issues for me again. I started having similar brick issues after
upgrading to 4.3 originally that seemed to have stabilized
On Fri, Mar 15, 2019 at 2:00 PM Simon Coter wrote:
> Hi,
>
> something that I'm seeing in the vdsm.log that I think is gluster related
> is the following message:
>
> 2019-03-15 05:58:28,980-0700 INFO (jsonrpc/6) [root] managedvolume not
> supported: Managed Volume Not Supported.
On Fri, Mar 15, 2019 at 1:46 PM Strahil Nikolov <hunter86...@yahoo.com> wrote:
>
> >I along with others had GlusterFS issues after 4.3 upgrades, the failed
> >to dispatch handler issue with bricks going down intermittently. After
> >some time it seemed to have corrected itself (at le
On Fri, Mar 15, 2019 at 1:38 PM Jayme wrote:
> I along with others had GlusterFS issues after 4.3 upgrades, the failed to
> dispatch handler issue with bricks going down intermittently. After some
> time it seemed to have corrected itself (at least in my environment) and I
> hadn't
>I along with others had GlusterFS issues after 4.3 upgrades, the failed to
>dispatch handler issue with bricks going down intermittently. After some time
>it seemed to have corrected itself (at least in my environment) and I hadn't
>had any brick problems in a while. I upgraded my three no