ere would be how to achieve this; we do not have any hooks
in place for upgrades, either in oVirt or in Gluster.
@Gobinda Das Do you see any way to achieve this?
Thanks,
Satheesaran Sundaramoorthi
___
Users mailing list -- users@ovirt.org
read-only
on the slave volume.
# gluster volume set <SLAVE_VOLNAME> features.read-only off
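To expand on the command above, here is a hedged sketch of checking and then clearing the read-only translator on a geo-replication slave volume. The volume name `slavevol` is a placeholder; substitute your own.

```shell
# "slavevol" is a hypothetical slave volume name -- substitute yours.
# First check whether the read-only translator is currently enabled:
gluster volume get slavevol features.read-only

# Then disable it so the slave volume accepts writes again:
gluster volume set slavevol features.read-only off
```

Note that geo-replication itself marks the slave read-only for a reason; only disable this if you are intentionally promoting the slave (e.g. during failover).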
I do not think this is detailed in the upstream documentation.
@Ritesh Chikatwar Could you check whether the
documentation is available upstream?
Thanks,
Satheesaran Sundaramoorthi
> On 28 May 2021, at 08:01, Simon
On Tue, Oct 1, 2019 at 8:09 PM Jayme wrote:
> Hello,
>
> I am running oVirt 4.3.6 and glusterd service and peers on all HCI nodes
> are connected and working properly after updating. I'm not sure if I
> understand what your question or issue is. It definitely should be safe to
> update your
On Tue, Oct 1, 2019 at 6:00 PM Jayme wrote:
> In an oVirt HCI gluster configuration you should be able to take down at
> least one host at a time. The procedure I use to upgrade my oVirt HCI
> cluster goes something like this:
>
> 1. Upgrade the oVirt engine setup packages. Upgrade the engine.
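For the engine part of step 1, a minimal sketch of the commands involved, assuming a yum-based oVirt 4.3 engine machine (paths and repos per the standard oVirt upgrade procedure; adjust for your release):

```shell
# On the engine machine:
yum update ovirt\*setup\*   # pull in the new engine setup packages
engine-setup                 # run the actual engine upgrade
yum update                   # afterwards, update the remaining OS packages
```

After `engine-setup` completes, the hosts can then be moved to maintenance and updated one at a time from the Administration Portal.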
On Tue, Oct 1, 2019 at 11:27 AM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:
> Hi all,
>
> Sorry for asking again :/
>
> Is there any consensus on not using --emulate512 anymore while creating
> VDO volumes on Gluster?
> Since this parameter cannot be changed once the volume
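For context on the question above, this is how the flag is passed at creation time with the `vdo` manager CLI. The volume name and device below are examples only, and as the poster notes, the choice is fixed once the volume exists:

```shell
# Example names/devices only -- substitute your own.
# With 512-byte sector emulation (the historical recommendation for Gluster):
vdo create --name=vdo_gluster --device=/dev/sdb --emulate512=enabled

# Without it, the VDO volume exposes native 4K sectors instead:
vdo create --name=vdo_gluster --device=/dev/sdb
```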
On Fri, Sep 27, 2019 at 4:13 PM Sandro Bonazzola wrote:
> On Fri, Sep 27, 2019 at 11:31 AM Rik Theys <
> rik.th...@esat.kuleuven.be> wrote:
>
>> Hi,
>>
>> After upgrading to 4.3.6, my storage domain can no longer be activated,
>> rendering my data center useless.
>>
> Hello Rik,
d the storage domain in ovirt now.
>>
>> I see network.remote-dio=enable is part of the gluster virt group, so
>> apparently it’s not getting set by oVirt during the volume creation/optimize
>> for storage?
>>
>
> I'm not sure who is responsible for changing these se
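For reference on the virt-group discussion above, the profile can be applied and the individual option verified with the standard gluster CLI. The volume name `data` is a placeholder:

```shell
# "data" is a placeholder volume name. Applying the virt profile sets
# network.remote-dio=enable along with the other virt-group options
# (the group definition ships in /var/lib/glusterd/groups/virt):
gluster volume set data group virt

# Verify the individual option afterwards:
gluster volume get data network.remote-dio
```

This is what the "Optimize for Virt Store" action is expected to do; running it by hand is a reasonable workaround if the option did not get set.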
Hi All,
I have created a converged setup with a cluster that has both virt and gluster
capabilities. There are three hosts in this cluster, and the cluster also
has 'native access to gluster domain' enabled, which allows VMs to use
the libgfapi access mechanism.
With this setup, I see VMs created landing
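For readers unfamiliar with the setting mentioned above: libgfapi access is toggled engine-side with `engine-config`. A hedged sketch, assuming a 4.1-level cluster (use your own cluster compatibility version for `--cver`):

```shell
# On the engine machine; "4.1" is an example compatibility version.
engine-config -s LibgfApiSupported=true --cver=4.1
systemctl restart ovirt-engine
```

VMs pick up the gfapi disk access path on their next (re)start, not on the fly.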
On Sat, Jun 24, 2017 at 3:17 PM, Abi Askushi
wrote:
> Hi all,
>
> For the record, I had to manually remove the conflicting directory and its
> respective gfid from the arbiter volume:
>
> getfattr -m . -d -e hex e1c80750-b880-495e-9609-b8bc7760d101/ha_agent
>
> That
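As background for the cleanup described above: the gfid that `getfattr` prints (attribute `trusted.gfid`) maps to an entry under the brick's hidden `.glusterfs` directory, laid out by the first two pairs of hex digits. A sketch with illustrative paths only; verify on your own bricks before removing anything:

```shell
# Read the gfid of the conflicting entry on the brick (path is an example):
getfattr -m . -d -e hex /gluster/brick1/ha_agent

# For a gfid of the form aabbccdd-..., the matching .glusterfs entry is:
#   <brick>/.glusterfs/aa/bb/aabbccdd-...
# (a hardlink for files, a symlink for directories)
ls -l /gluster/brick1/.glusterfs/aa/bb/
```

Removing the entry and its `.glusterfs` counterpart on the bad brick lets self-heal recreate them from the healthy replicas.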