I have gone in and changed the libvirt configuration files on the cluster
nodes, which has resolved the issue for the time being.
I can revert one of them and post the logs to help with the issue,
hopefully tomorrow.
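
Concretely, the change on each node was along these lines (a sketch of the
workaround discussed below, not a tested recipe; verify the file path and
option name on your libvirt version before relying on it):

```
# /etc/libvirt/qemu.conf
# Stop libvirt from changing ownership of disk images when a VM starts:
dynamic_ownership = 0
```

followed by restarting libvirtd (e.g. `systemctl restart libvirtd`) on each
node.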
On 2019-06-14 17:56, Nir Soffer wrote:
> On Fri, Jun 14, 2019 at 7:05 PM Milan Zamazal <mzama...@redhat.com> wrote:
>
>> Alex McWhirter <a...@triadic.us> writes:
>>
>>> In this case, I should be able to edit /etc/libvirt/qemu.conf on all
>>> the nodes to disable dynamic ownership as a temporary measure until
>>> this is patched for libgfapi?
>>
>> No, other devices might have permission problems in such a case.
>
> I wonder how libvirt can change the permissions for devices it does not know
> about?
>
> When using libgfapi, we pass libvirt:
>     <disk name='vda' snapshot='external' type='network'>
>       <source protocol='gluster'
>               name='volume/11111111-1111-1111-1111-111111111111'
>               type='network'>
>         <host name="brick1.example.com" port="49152" transport="tcp"/>
>         <host name="brick2.example.com" port="49153" transport="tcp"/>
>       </source>
>     </disk>
>
> So libvirt does not have the path to the file, and it cannot change the
> permissions.
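
Nir's point can be illustrated with a short check (illustrative only; the
network-disk XML is trimmed from the example above, and the file-backed disk
and its path are hypothetical, added for contrast):

```python
import xml.etree.ElementTree as ET

# The libgfapi (network) disk element: the source carries a gluster
# volume/image name and a host list, but no local file path.
network_disk = ET.fromstring("""
<disk type='network' snapshot='external'>
  <source protocol='gluster' name='volume/11111111-1111-1111-1111-111111111111'>
    <host name='brick1.example.com' port='49152' transport='tcp'/>
  </source>
</disk>
""")

# A plain file-backed disk for comparison (hypothetical path):
file_disk = ET.fromstring("""
<disk type='file' device='disk'>
  <source file='/rhev/data-center/mnt/example-sd/images/disk1'/>
</disk>
""")

# Only the file-backed disk exposes a path that libvirt's dynamic
# ownership code could chown:
print(network_disk.find('source').get('file'))  # None
print(file_disk.find('source').get('file'))     # /rhev/data-center/mnt/example-sd/images/disk1
```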
>
> Alex, can you reproduce this flow and attach vdsm and engine logs from
> all hosts to the bug?
>
> Nir
>
>>> On 2019-06-13 10:37, Milan Zamazal wrote:
>>>> Shani Leviim <slev...@redhat.com> writes:
>>>>
>>>>> Hi,
>>>>> It seems that you hit this bug:
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=1666795
>>>>>
>>>>> Adding +Milan Zamazal <mzama...@redhat.com>. Can you please confirm?
>>>>
>>>> There may still be problems when using GlusterFS with libgfapi:
>>>> https://bugzilla.redhat.com/1719789.
>>>>
>>>> What's your Vdsm version and which kind of storage do you use?
>>>>
>>>>> Regards,
>>>>> Shani Leviim
>>>>>
>>>>> On Thu, Jun 13, 2019 at 12:18 PM Alex McWhirter <a...@triadic.us>
>>>>> wrote:
>>>>>
>>>>>> After upgrading from 4.2 to 4.3, when a VM live migrates, its disk
>>>>>> images become owned by root:root. Live migration succeeds and the
>>>>>> VM stays up, but after shutting the VM down from this point,
>>>>>> starting it up again will fail. At that point I have to go in and
>>>>>> change the permissions back to vdsm:kvm on the images, and the VM
>>>>>> will boot again.
>>>>>> _______________________________________________
>>>>>> Users mailing list -- users@ovirt.org
>>>>>> To unsubscribe send an email to users-le...@ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct:
>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives:
>>>>>> https://lists.ovirt.org/archives/list/users@ovirt.org/message/TSWRTC2E7XZSGSLA7NC5YGP7BIWQKMM3/
>>>>>>