Re: [ovirt-devel] make check on master fails due to UnicodeDecodeError

2018-04-11 Thread Shani Leviim
Hi,



*Regards,*

*Shani Leviim*

On Tue, Apr 10, 2018 at 5:32 PM, Nir Soffer  wrote:

> On Tue, Apr 10, 2018 at 5:21 PM Shani Leviim  wrote:
>
>> Hi,
>>
>> Yes, I did clean the root directory, but it didn't solve the issue.
>> I'm currently running the tests on Fedora 27, using Python version 2.7.14.
>>
>> Thanks to Dan's help, it seems that we found the root cause:
>>
>> I had 2 pickle files under /var/cache/vdsm/schema: vdsm-api.pickle and
>> vdsm-events.pickle.
>> After removing them, re-running the tests with make check completed
>> successfully.
>>
>
> How did you have cached schema under /var/run? This directory is owned by
> root.
> Are you running the tests as root?
>
No, I'm running the tests on my laptop using my own user.

>
> This sounds like a bug in the code using the pickled schema. The pickle
> should not be used if its timestamp does not match the timestamp of the
> source.
>

We suspect that the pickle encoding differs between Python 2 and
Python 3.
After I changed "with open(pickle_path) as f:" to
"with open(pickle_path, 'rb') as f:" (inspired by [1]),
make check completes successfully.

[1] https://stackoverflow.com/questions/28218466/unpickling-a-python-2-object-with-python-3


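For reference, the binary-mode fix (plus the `encoding` argument from the linked answer, which is needed when a Python 2 pickle contains non-UTF-8 bytes) might look like this hypothetical helper:

```python
import pickle
import sys


def load_pickle_compat(path):
    """Load a pickle that may have been written by Python 2.

    Hypothetical helper illustrating the fix: binary mode is required
    (text mode triggers UnicodeDecodeError on non-UTF-8 bytes), and on
    Python 3 the extra encoding argument lets py2 str bytes load.
    """
    with open(path, 'rb') as f:
        if sys.version_info[0] >= 3:
            # 'latin1' maps every byte 0-255, so py2 str data survives.
            return pickle.load(f, encoding='latin1')
        return pickle.load(f)
```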
> Also in make check, we should not use host schema cache, but local schema
> cache
> generated by running "make".
>
> Nir
>
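Nir's point above, that the pickled schema must be invalidated when its timestamp no longer matches the source, could be sketched roughly like this (hypothetical helper, not vdsm's actual loader):

```python
import os
import pickle


def load_schema(source_path, pickle_path):
    """Return the cached schema, or None if the cache is stale/unreadable.

    Hypothetical helper; vdsm's actual schema loader differs, but the
    idea is the same: only trust the pickle when its mtime matches the
    source's mtime, so an old cache is never silently reused.
    """
    try:
        if os.path.getmtime(pickle_path) != os.path.getmtime(source_path):
            return None  # stale cache; caller should rebuild from source
        with open(pickle_path, 'rb') as f:  # binary mode, as in the fix
            return pickle.load(f)
    except (OSError, IOError, pickle.UnpicklingError):
        return None
```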
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-04 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-11 Thread Alona Kaplan
Hi Ravi,

Added comments to the patch.

Regarding the lock - the lock shouldn't be released until the command and
its callbacks have finished. Handling a delayed failure should be done
under the lock, since it doesn't make sense to handle the failure while
another monitoring process may be running.

Besides the locking issue, IMO the main problem is the delayed failures.
If the vdsm is down, why is there both an immediate exception and a delayed
one? The delayed one is redundant. In any case, 'callback.onFailure'
shouldn't be executed twice.
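A guard ensuring the failure callback fires at most once would avoid both the double handling and the extra lock releases. A minimal sketch (in Python for illustration; the engine code in question is Java):

```python
import threading


class OnceCallback:
    """Run a failure callback at most once.

    Sketch only (the actual engine code is Java): wrapping the command
    callback like this would stop the immediate failure path and the
    delayed ResponseTracker timeout from both invoking onFailure, and
    with it the double release of the monitoring lock.
    """

    def __init__(self, on_failure):
        self._on_failure = on_failure
        self._lock = threading.Lock()
        self._fired = False

    def on_failure(self, error):
        with self._lock:
            if self._fired:
                return False  # already handled; skip the second invocation
            self._fired = True
        self._on_failure(error)
        return True
```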

Thanks,

Alona.


On Wed, Apr 11, 2018 at 1:04 AM, Ravi Shankar Nori  wrote:

> This [1] should fix the multiple release lock issue
>
> [1] https://gerrit.ovirt.org/#/c/90077/
>
> On Tue, Apr 10, 2018 at 3:53 PM, Ravi Shankar Nori 
> wrote:
>
>> Working on a patch will post a fix
>>
>> Thanks
>>
>> Ravi
>>
>> On Tue, Apr 10, 2018 at 9:14 AM, Alona Kaplan 
>> wrote:
>>
>>> Hi all,
>>>
>>> Looking at the log it seems that the new GetCapabilitiesAsync is
>>> responsible for the mess.
>>>
>>> -
>>> * 08:29:47 - engine loses connectivity to host 
>>> 'lago-basic-suite-4-2-host-0'.*
>>>
>>>
>>>
>>> *- Every 3 seconds a getCapabilitiesAsync request is sent to the host 
>>> (unsuccessfully).*
>>>
>>>  * before each "getCapabilitiesAsync" the monitoring lock is taken 
>>> (VdsManager.refreshImpl)
>>>
>>>  * "getCapabilitiesAsync" immediately fails and throws 
>>> 'VDSNetworkException: java.net.ConnectException: Connection refused'. The 
>>> exception is caught by 
>>> 'GetCapabilitiesAsyncVDSCommand.executeVdsBrokerCommand' which calls 
>>> 'onFailure' of the callback and re-throws the exception.
>>>
>>>  catch (Throwable t) {
>>> getParameters().getCallback().onFailure(t);
>>> throw t;
>>>  }
>>>
>>> * The 'onFailure' of the callback releases the "monitoringLock" 
>>> ('postProcessRefresh()->afterRefreshTreatment()-> if (!succeeded) 
>>> lockManager.releaseLock(monitoringLock);')
>>>
>>> * 'VdsManager.refreshImpl' catches the network exception, marks 
>>> 'releaseLock = true' and *tries to release the already released lock*.
>>>
>>>   The following warning is printed to the log -
>>>
>>>   WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-53) [] Trying to release 
>>> exclusive lock which does not exist, lock key: 
>>> 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'
>>>
>>>
>>>
>>>
>>> *- 08:30:51 a successful getCapabilitiesAsync is sent.*
>>>
>>>
>>> *- 08:32:55 - The failing test starts (Setup Networks for setting ipv6).
>>> *
>>>
>>> * SetupNetworks takes the monitoring lock.
>>>
>>> *- 08:33:00 - ResponseTracker cleans the getCapabilitiesAsync requests from 
>>> 4 minutes ago from its queue and prints a VDSNetworkException: Vds timeout 
>>> occurred.*
>>>
>>>   * When the first request is removed from the queue 
>>> ('ResponseTracker.remove()'), the
>>> *'Callback.onFailure' is invoked (for the second time) -> monitoring lock 
>>> is released (the lock taken by the SetupNetworks!).*
>>>
>>>   * *The other requests removed from the queue also try to release the 
>>> monitoring lock*, but there is nothing to release.
>>>
>>>   * The following warning log is printed -
>>> WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] Trying to release 
>>> exclusive lock which does not exist, lock key: 
>>> 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'
>>>
>>> - *08:33:00 - SetupNetwork fails on Timeout ~4 seconds after it started*. 
>>> Why? I'm not 100% sure, but I guess the late processing of 
>>> 'getCapabilitiesAsync', which causes the loss of the monitoring lock, and the 
>>> late + multiple processing of the failure are the root cause.
>>>
>>>
>>> Ravi, 'getCapabilitiesAsync' failure is handled twice and there are three 
>>> attempts to release the lock. Please share your opinion regarding how 
>>> it should be fixed.
>>>
>>>
>>> Thanks,
>>>
>>> Alona.
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Sun, Apr 8, 2018 at 1:21 PM, Dan Kenigsberg 
>>> wrote:
>>>
 On Sun, Apr 8, 2018 at 9:21 AM, Edward Haas  wrote:

>
>
> On Sun, Apr 8, 2018 at 9:15 AM, Eyal Edri  wrote:
>
>> Was already done by Yaniv - https://gerrit.ovirt.org/#/c/89851.
>> Is it still failing?
>>
>> On Sun, Apr 8, 2018 at 8:59 AM, Barak Korren 
>> wrote:
>>
>>> On 7 April 2018 at 00:30, Dan Kenigsberg  wrote:
>>> > No, I am afraid that we have not managed to understand why setting
>>> > an ipv6 address took the host off the grid. We shall continue
>>> > researching this next week.
>>> >
>>> > Edy, https://gerrit.ovirt.org/#/c/88637/ is already 4 weeks old,
>>> > but could it possibly be related (I really doubt that)?
>>> >
>>>
>>
> Sorry, but I do not see how this problem is related to VDSM.
> There is nothing that indicates that there is a VDSM problem.

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Eyal Edri
Please make sure to run as much OST suites on this patch as possible before
merging ( using 'ci please build' )

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
wrote:

> Hey,
>
> I've created a patch[0] that is finally able to activate libvirt's
> dynamic_ownership for VDSM while not negatively affecting
> functionality of our storage code.
>
> That of course comes with quite a bit of code removal, mostly in the
> area of host devices, hwrng and anything that touches devices; bunch
> of test changes and one XML generation caveat (storage is handled by
> VDSM, therefore disk relabelling needs to be disabled on the VDSM
> level).
>
> Because of the scope of the patch, I welcome storage/virt/network
> people to review the code and consider the implication this change has
> on current/future features.
>
> [0] https://gerrit.ovirt.org/#/c/89830/
>
> mpolednik
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel
>
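For context on the XML caveat Martin mentions: with dynamic_ownership enabled in qemu.conf, libvirt would chown/relabel every disk, so a VDSM-managed disk can opt out via a per-device seclabel. A rough sketch of generating such XML (element names follow libvirt's domain XML format; vdsm's actual generator differs):

```python
import xml.etree.ElementTree as ET


def disk_xml(dev_path):
    """Build a <disk> element that opts out of libvirt relabelling.

    Sketch only; vdsm's real XML generation differs. The per-device
    <seclabel relabel='no'> under <source> keeps VDSM-managed storage
    under VDSM's own ownership handling even with dynamic_ownership on.
    """
    disk = ET.Element('disk', type='block', device='disk')
    ET.SubElement(disk, 'driver', name='qemu', type='raw')
    source = ET.SubElement(disk, 'source', dev=dev_path)
    ET.SubElement(source, 'seclabel', model='dac', relabel='no')
    return ET.tostring(disk).decode()
```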



-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Nir Soffer
On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:

> Please make sure to run as much OST suites on this patch as possible
> before merging ( using 'ci please build' )
>

But note that OST is not a way to verify the patch.

Such changes require testing with all storage types we support.

Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
> wrote:
>
>> Hey,
>>
>> I've created a patch[0] that is finally able to activate libvirt's
>> dynamic_ownership for VDSM while not negatively affecting
>> functionality of our storage code.
>>
>> That of course comes with quite a bit of code removal, mostly in the
>> area of host devices, hwrng and anything that touches devices; bunch
>> of test changes and one XML generation caveat (storage is handled by
>> VDSM, therefore disk relabelling needs to be disabled on the VDSM
>> level).
>>
>> Because of the scope of the patch, I welcome storage/virt/network
>> people to review the code and consider the implication this change has
>> on current/future features.
>>
>> [0] https://gerrit.ovirt.org/#/c/89830/
>>
>> mpolednik
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> ___
> Devel mailing list
> Devel@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/devel

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Eyal Edri
On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:

> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>
>> Please make sure to run as much OST suites on this patch as possible
>> before merging ( using 'ci please build' )
>>
>
> But note that OST is not a way to verify the patch.
>
> Such changes require testing with all storage types we support.
>

Well, we already have an HE suite that runs on iSCSI, so at least we have
NFS+iSCSI on nested; for real storage testing, you'll have to do it
manually.


>
> Nir
>
> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
>> wrote:
>>
>>> Hey,
>>>
>>> I've created a patch[0] that is finally able to activate libvirt's
>>> dynamic_ownership for VDSM while not negatively affecting
>>> functionality of our storage code.
>>>
>>> That of course comes with quite a bit of code removal, mostly in the
>>> area of host devices, hwrng and anything that touches devices; bunch
>>> of test changes and one XML generation caveat (storage is handled by
>>> VDSM, therefore disk relabelling needs to be disabled on the VDSM
>>> level).
>>>
>>> Because of the scope of the patch, I welcome storage/virt/network
>>> people to review the code and consider the implication this change has
>>> on current/future features.
>>>
>>> [0] https://gerrit.ovirt.org/#/c/89830/
>>>
>>> mpolednik
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>>
>>
>>
>>
>> --
>>
>> Eyal edri
>>
>>
>> MANAGER
>>
>> RHV DevOps
>>
>> EMEA VIRTUALIZATION R&D
>>
>>
>> Red Hat EMEA 
>>  TRIED. TESTED. TRUSTED. 
>> phone: +972-9-7692018
>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>> ___
>> Devel mailing list
>> Devel@ovirt.org
>> http://lists.ovirt.org/mailman/listinfo/devel
>
>


-- 

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-04 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-11 Thread Alona Kaplan
On Tue, Apr 10, 2018 at 6:52 PM, Gal Ben Haim  wrote:

> I'm seeing the same error in [1], during 006_migrations.migrate_vm.
>
> [1] http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1650/
>

Seems like another bug. The migration failed because, for some reason, the
VM is already defined on the destination host.

2018-04-10 11:08:08,685-0400 ERROR (jsonrpc/0) [api] FINISH create
error=Virtual machine already exists (api:129)
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 122, in
method
ret = func(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 191, in create
raise exception.VMExists()
VMExists: Virtual machine already exists
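For reference, the failing path in the traceback boils down to an existence check like this (simplified toy sketch, not vdsm's actual code):

```python
class VMExists(Exception):
    """Raised when a VM with the same id is already defined."""


class VMContainer:
    """Toy model of the check behind the 'Virtual machine already
    exists' error (simplified; vdsm's API.create does much more)."""

    def __init__(self):
        self._vms = {}

    def create(self, vmid, params):
        # An incoming migration defines the VM on the destination host;
        # if an earlier migration left it defined (no destroy call), the
        # next create for the same id fails exactly like in the log.
        if vmid in self._vms:
            raise VMExists()
        self._vms[vmid] = params
```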




>
>
> On Tue, Apr 10, 2018 at 4:14 PM, Alona Kaplan  wrote:
>
>> Hi all,
>>
>> Looking at the log it seems that the new GetCapabilitiesAsync is
>> responsible for the mess.
>>
>> -
>> * 08:29:47 - engine loses connectivity to host 
>> 'lago-basic-suite-4-2-host-0'.*
>>
>>
>>
>> *- Every 3 seconds a getCapabilitiesAsync request is sent to the host 
>> (unsuccessfully).*
>>
>>  * before each "getCapabilitiesAsync" the monitoring lock is taken 
>> (VdsManager,refreshImpl)
>>
>>  * "getCapabilitiesAsync" immediately fails and throws 
>> 'VDSNetworkException: java.net.ConnectException: Connection refused'. The 
>> exception is caught by 
>> 'GetCapabilitiesAsyncVDSCommand.executeVdsBrokerCommand' which calls 
>> 'onFailure' of the callback and re-throws the exception.
>>
>>  catch (Throwable t) {
>> getParameters().getCallback().onFailure(t);
>> throw t;
>>  }
>>
>> * The 'onFailure' of the callback releases the "monitoringLock" 
>> ('postProcessRefresh()->afterRefreshTreatment()-> if (!succeeded) 
>> lockManager.releaseLock(monitoringLock);')
>>
>> * 'VdsManager,refreshImpl' catches the network exception, marks 
>> 'releaseLock = true' and *tries to release the already released lock*.
>>
>>   The following warning is printed to the log -
>>
>>   WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-53) [] Trying to release 
>> exclusive lock which does not exist, lock key: 
>> 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'
>>
>>
>>
>>
>> *- 08:30:51 a successful getCapabilitiesAsync is sent.*
>>
>>
>> *- 08:32:55 - The failing test starts (Setup Networks for setting ipv6).*
>>
>> * SetupNetworks takes the monitoring lock.
>>
>> *- 08:33:00 - ResponseTracker cleans the getCapabilitiesAsync requests from 
>> 4 minutes ago from its queue and prints a VDSNetworkException: Vds timeout 
>> occurred.*
>>
>>   * When the first request is removed from the queue 
>> ('ResponseTracker.remove()'), the
>> *'Callback.onFailure' is invoked (for the second time) -> monitoring lock is 
>> released (the lock taken by the SetupNetworks!).*
>>
>>   * *The other requests removed from the queue also try to release the 
>> monitoring lock*, but there is nothing to release.
>>
>>   * The following warning log is printed -
>> WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>> (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] Trying to release 
>> exclusive lock which does not exist, lock key: 
>> 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'
>>
>> - *08:33:00 - SetupNetwork fails on Timeout ~4 seconds after it started*. 
>> Why? I'm not 100% sure, but I guess the late processing of 
>> 'getCapabilitiesAsync', which causes the loss of the monitoring lock, and the 
>> late + multiple processing of the failure are the root cause.
>>
>>
>> Ravi, 'getCapabilitiesAsync' failure is handled twice and there are three 
>> attempts to release the lock. Please share your opinion regarding how it 
>> should be fixed.
>>
>>
>> Thanks,
>>
>> Alona.
>>
>>
>>
>>
>>
>>
>> On Sun, Apr 8, 2018 at 1:21 PM, Dan Kenigsberg  wrote:
>>
>>> On Sun, Apr 8, 2018 at 9:21 AM, Edward Haas  wrote:
>>>


 On Sun, Apr 8, 2018 at 9:15 AM, Eyal Edri  wrote:

> Was already done by Yaniv - https://gerrit.ovirt.org/#/c/89851.
> Is it still failing?
>
> On Sun, Apr 8, 2018 at 8:59 AM, Barak Korren 
> wrote:
>
>> On 7 April 2018 at 00:30, Dan Kenigsberg  wrote:
>> > No, I am afraid that we have not managed to understand why setting
>> > an ipv6 address took the host off the grid. We shall continue
>> > researching this next week.
>> >
>> > Edy, https://gerrit.ovirt.org/#/c/88637/ is already 4 weeks old,
>> > but could it possibly be related (I really doubt that)?
>> >
>>
>
 Sorry, but I do not see how this problem is related to VDSM.
 There is nothing that indicates that there is a VDSM problem.

 Has the RPC connection between Engine and VDSM failed?


>>> Further up the thread, Piotr noticed that (at least on one failure of
>>> this test) that the Vdsm host lost connectivity to its stor

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-04 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-11 Thread Arik Hadas
On Wed, Apr 11, 2018 at 12:45 PM, Alona Kaplan  wrote:

>
>
> On Tue, Apr 10, 2018 at 6:52 PM, Gal Ben Haim  wrote:
>
>> I'm seeing the same error in [1], during 006_migrations.migrate_vm.
>>
>> [1] http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1650/
>>
>
> Seems like another bug. The migration failed because, for some reason, the
> VM is already defined on the destination host.
>
> 2018-04-10 11:08:08,685-0400 ERROR (jsonrpc/0) [api] FINISH create
> error=Virtual machine already exists (api:129)
> Traceback (most recent call last):
> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 122, in
> method
> ret = func(*args, **kwargs)
> File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 191, in create
> raise exception.VMExists()
> VMExists: Virtual machine already exists
>
>
Milan, Francesco, could it be that because of [1], which appears on the
destination host right after shutting down the VM, the VM remained defined
on that host?

[1] 2018-04-10 11:01:40,005-0400 ERROR (libvirt/events) [vds] Error running
VM callback (clientIF:683)

Traceback (most recent call last):

  File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 646, in
dispatchLibvirtEvents

v.onLibvirtLifecycleEvent(event, detail, None)

AttributeError: 'NoneType' object has no attribute 'onLibvirtLifecycleEvent'



>
>
>>
>>
>> On Tue, Apr 10, 2018 at 4:14 PM, Alona Kaplan 
>> wrote:
>>
>>> Hi all,
>>>
>>> Looking at the log it seems that the new GetCapabilitiesAsync is
>>> responsible for the mess.
>>>
>>> -
>>> * 08:29:47 - engine loses connectivity to host 
>>> 'lago-basic-suite-4-2-host-0'.*
>>>
>>>
>>>
>>> *- Every 3 seconds a getCapabilitiesAsync request is sent to the host 
>>> (unsuccessfully).*
>>>
>>>  * before each "getCapabilitiesAsync" the monitoring lock is taken 
>>> (VdsManager,refreshImpl)
>>>
>>>  * "getCapabilitiesAsync" immediately fails and throws 
>>> 'VDSNetworkException: java.net.ConnectException: Connection refused'. The 
>>> exception is caught by 
>>> 'GetCapabilitiesAsyncVDSCommand.executeVdsBrokerCommand' which calls 
>>> 'onFailure' of the callback and re-throws the exception.
>>>
>>>  catch (Throwable t) {
>>> getParameters().getCallback().onFailure(t);
>>> throw t;
>>>  }
>>>
>>> * The 'onFailure' of the callback releases the "monitoringLock" 
>>> ('postProcessRefresh()->afterRefreshTreatment()-> if (!succeeded) 
>>> lockManager.releaseLock(monitoringLock);')
>>>
>>> * 'VdsManager,refreshImpl' catches the network exception, marks 
>>> 'releaseLock = true' and *tries to release the already released lock*.
>>>
>>>   The following warning is printed to the log -
>>>
>>>   WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-53) [] Trying to release 
>>> exclusive lock which does not exist, lock key: 
>>> 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'
>>>
>>>
>>>
>>>
>>> *- 08:30:51 a successful getCapabilitiesAsync is sent.*
>>>
>>>
>>> *- 08:32:55 - The failing test starts (Setup Networks for setting ipv6).
>>> *
>>>
>>> * SetupNetworks takes the monitoring lock.
>>>
>>> *- 08:33:00 - ResponseTracker cleans the getCapabilitiesAsync requests from 
>>> 4 minutes ago from its queue and prints a VDSNetworkException: Vds timeout 
>>> occurred.*
>>>
>>>   * When the first request is removed from the queue 
>>> ('ResponseTracker.remove()'), the
>>> *'Callback.onFailure' is invoked (for the second time) -> monitoring lock 
>>> is released (the lock taken by the SetupNetworks!).*
>>>
>>>   * *The other requests removed from the queue also try to release the 
>>> monitoring lock*, but there is nothing to release.
>>>
>>>   * The following warning log is printed -
>>> WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
>>> (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] Trying to release 
>>> exclusive lock which does not exist, lock key: 
>>> 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'
>>>
>>> - *08:33:00 - SetupNetwork fails on Timeout ~4 seconds after it started*. 
>>> Why? I'm not 100% sure, but I guess the late processing of 
>>> 'getCapabilitiesAsync', which causes the loss of the monitoring lock, and the 
>>> late + multiple processing of the failure are the root cause.
>>>
>>>
>>> Ravi, 'getCapabilitiesAsync' failure is handled twice and there are three 
>>> attempts to release the lock. Please share your opinion regarding how 
>>> it should be fixed.
>>>
>>>
>>> Thanks,
>>>
>>> Alona.
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Sun, Apr 8, 2018 at 1:21 PM, Dan Kenigsberg 
>>> wrote:
>>>
 On Sun, Apr 8, 2018 at 9:21 AM, Edward Haas  wrote:

>
>
> On Sun, Apr 8, 2018 at 9:15 AM, Eyal Edri  wrote:
>
>> Was already done by Yaniv - https://gerrit.ovirt.org/#/c/89851.
>> Is it still failing?
>>
>> On Sun, Apr 8, 2018 at 8:59 AM, Barak Korren 
>> wrote:
>>
>>> On 7 April 2018 at 00:30, Dan

Re: [ovirt-devel] make check on master fails due to UnicodeDecodeError

2018-04-11 Thread Shani Leviim
A patch was uploaded: https://gerrit.ovirt.org/#/c/90093/


*Regards,*

*Shani Leviim*

On Wed, Apr 11, 2018 at 9:59 AM, Shani Leviim  wrote:

> Hi,
>
>
>
> *Regards,*
>
> *Shani Leviim*
>
> On Tue, Apr 10, 2018 at 5:32 PM, Nir Soffer  wrote:
>
>> On Tue, Apr 10, 2018 at 5:21 PM Shani Leviim  wrote:
>>
>>> Hi,
>>>
>>> Yes, I did clean the root directory, but it didn't solve the issue.
>>> I'm currently running the tests on Fedora 27, using Python version 2.7.14.
>>>
>>> Thanks to Dan's help, it seems that we found the root cause:
>>>
>>> I had 2 pickle files under /var/cache/vdsm/schema: vdsm-api.pickle and
>>> vdsm-events.pickle.
>>> After removing them, re-running the tests with make check completed
>>> successfully.
>>>
>>
>> How did you have cached schema under /var/run? This directory is owned by
>> root.
>> Are you running the tests as root?
>>
> No, I'm running the tests on my laptop using my own user.
>
>>
>> This sounds like a bug in the code using the pickled schema. The pickle
>> should not be used if its timestamp does not match the timestamp of the
>> source.
>>
>
> We suspect that the pickle encoding differs between Python 2 and
> Python 3.
> After I changed "with open(pickle_path) as f:" to
> "with open(pickle_path, 'rb') as f:" (inspired by [1]),
> make check completes successfully.
>
> [1] https://stackoverflow.com/questions/28218466/unpickling-a-python-2-object-with-python-3
>
>
>> Also in make check, we should not use host schema cache, but local schema
>> cache
>> generated by running "make".
>>
>> Nir
>>
>
>

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Nir Soffer
On Wed, Apr 11, 2018 at 12:38 PM Eyal Edri  wrote:

> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:
>
>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>>
>>> Please make sure to run as much OST suites on this patch as possible
>>> before merging ( using 'ci please build' )
>>>
>>
>> But note that OST is not a way to verify the patch.
>>
>> Such changes require testing with all storage types we support.
>>
>
> Well, we already have an HE suite that runs on iSCSI, so at least we have
> NFS+iSCSI on nested; for real storage testing, you'll have to do it
> manually.
>

We need glusterfs (both native and fuse based), and cinder/ceph storage.

But we cannot practically test all flows with all types of storage for
every patch.

Nir


>
>
>>
>> Nir
>>
>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
>>> wrote:
>>>
 Hey,

 I've created a patch[0] that is finally able to activate libvirt's
 dynamic_ownership for VDSM while not negatively affecting
 functionality of our storage code.

 That of course comes with quite a bit of code removal, mostly in the
 area of host devices, hwrng and anything that touches devices; bunch
 of test changes and one XML generation caveat (storage is handled by
 VDSM, therefore disk relabelling needs to be disabled on the VDSM
 level).

 Because of the scope of the patch, I welcome storage/virt/network
 people to review the code and consider the implication this change has
 on current/future features.

 [0] https://gerrit.ovirt.org/#/c/89830/

 mpolednik
 ___
 Devel mailing list
 Devel@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/devel

>>>
>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>>
>>> MANAGER
>>>
>>> RHV DevOps
>>>
>>> EMEA VIRTUALIZATION R&D
>>>
>>>
>>> Red Hat EMEA 
>>>  TRIED. TESTED. TRUSTED.
>>> 
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>>> ___
>>> Devel mailing list
>>> Devel@ovirt.org
>>> http://lists.ovirt.org/mailman/listinfo/devel
>>
>>
>
>
> --
>
> Eyal edri
>
>
> MANAGER
>
> RHV DevOps
>
> EMEA VIRTUALIZATION R&D
>
>
> Red Hat EMEA 
>  TRIED. TESTED. TRUSTED. 
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ)
>

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Martin Polednik

On 11/04/18 12:27 +, Nir Soffer wrote:

On Wed, Apr 11, 2018 at 12:38 PM Eyal Edri  wrote:


On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:


On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:


Please make sure to run as much OST suites on this patch as possible
before merging ( using 'ci please build' )



But note that OST is not a way to verify the patch.

Such changes require testing with all storage types we support.



Well, we already have an HE suite that runs on iSCSI, so at least we have
NFS+iSCSI on nested; for real storage testing, you'll have to do it
manually.



We need glusterfs (both native and fuse based), and cinder/ceph storage.

But we cannot practically test all flows with all types of storage for
every patch.


That leads to a question - how do I go about verifying such a patch
without a sufficient environment? Is there someone from storage QA who
could assist with this?


Nir







Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 

wrote:


Hey,

I've created a patch[0] that is finally able to activate libvirt's
dynamic_ownership for VDSM while not negatively affecting
functionality of our storage code.

That of course comes with quite a bit of code removal, mostly in the
area of host devices, hwrng and anything that touches devices; bunch
of test changes and one XML generation caveat (storage is handled by
VDSM, therefore disk relabelling needs to be disabled on the VDSM
level).

Because of the scope of the patch, I welcome storage/virt/network
people to review the code and consider the implication this change has
on current/future features.

[0] https://gerrit.ovirt.org/#/c/89830/

mpolednik
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel





--

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED.

phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)
___
Devel mailing list
Devel@ovirt.org
http://lists.ovirt.org/mailman/listinfo/devel






--

Eyal edri


MANAGER

RHV DevOps

EMEA VIRTUALIZATION R&D


Red Hat EMEA 
 TRIED. TESTED. TRUSTED. 
phone: +972-9-7692018
irc: eedri (on #tlv #rhev-dev #rhev-integ)




Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Nir Soffer
On Wed, Apr 11, 2018 at 3:30 PM Martin Polednik 
wrote:

> On 11/04/18 12:27 +, Nir Soffer wrote:
> >On Wed, Apr 11, 2018 at 12:38 PM Eyal Edri  wrote:
> >
> >> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer 
> wrote:
> >>
> >>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
> >>>
>  Please make sure to run as much OST suites on this patch as possible
>  before merging ( using 'ci please build' )
> 
> >>>
> >>> But note that OST is not a way to verify the patch.
> >>>
> >>> Such changes require testing with all storage types we support.
> >>>
> >>
> >> Well, we already have an HE suite that runs on iSCSI, so at least we have
> >> NFS+iSCSI on nested; for real storage testing, you'll have to do it
> >> manually.
> >>
> >
> >We need glusterfs (both native and fuse based), and cinder/ceph storage.
> >
> >But we cannot practically test all flows with all types of storage for
> >every patch.
>
> That leads to a question - how do I go about verifying such a patch
> without a sufficient environment? Is there someone from storage QA who
> could assist with this?
>

Good question!

I hope Denis can help with verifying the glusterfs changes.

With cinder/ceph, maybe Elad can provide a setup for testing, or run some
automation tests on the patch?

Elad also has other automated tests for NFS/iSCSI that are worth running
before we merge such changes.

Nir


>
> >Nir
> >
> >
> >>
> >>
> >>>
> >>> Nir
> >>>
> >>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik  >
>  wrote:
> 
> > Hey,
> >
> > I've created a patch[0] that is finally able to activate libvirt's
> > dynamic_ownership for VDSM while not negatively affecting
> > functionality of our storage code.
> >
> > That of course comes with quite a bit of code removal, mostly in the
> > area of host devices, hwrng and anything that touches devices; bunch
> > of test changes and one XML generation caveat (storage is handled by
> > VDSM, therefore disk relabelling needs to be disabled on the VDSM
> > level).
> >
> > Because of the scope of the patch, I welcome storage/virt/network
> > people to review the code and consider the implication this change
> has
> > on current/future features.
> >
> > [0] https://gerrit.ovirt.org/#/c/89830/
> >
> > mpolednik
> > ___
> > Devel mailing list
> > Devel@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/devel
> >
> 
> 
> 
>  --
> 
>  Eyal edri
> 
> 
>  MANAGER
> 
>  RHV DevOps
> 
>  EMEA VIRTUALIZATION R&D
> 
> 
>  Red Hat EMEA 
>   TRIED. TESTED. TRUSTED.
>  
>  phone: +972-9-7692018
>  irc: eedri (on #tlv #rhev-dev #rhev-integ)
>  ___
>  Devel mailing list
>  Devel@ovirt.org
>  http://lists.ovirt.org/mailman/listinfo/devel
> >>>
> >>>
> >>
> >>
> >> --
> >>
> >> Eyal edri
> >>
> >>
> >> MANAGER
> >>
> >> RHV DevOps
> >>
> >> EMEA VIRTUALIZATION R&D
> >>
> >>
> >> Red Hat EMEA 
> >>  TRIED. TESTED. TRUSTED. <
> https://redhat.com/trusted>
> >> phone: +972-9-7692018
> >> irc: eedri (on #tlv #rhev-dev #rhev-integ)
> >>
>

Re: [ovirt-devel] [ OST Failure Report ] [ oVirt 4.2 ] [ 2018-04-04 ] [006_migrations.prepare_migration_attachments_ipv6]

2018-04-11 Thread Milan Zamazal
Arik Hadas  writes:

> On Wed, Apr 11, 2018 at 12:45 PM, Alona Kaplan  wrote:
>
>>
>>
>> On Tue, Apr 10, 2018 at 6:52 PM, Gal Ben Haim  wrote:
>>
>>> I'm seeing the same error in [1], during 006_migrations.migrate_vm.
>>>
>>> [1] http://jenkins.ovirt.org/job/ovirt-4.2_change-queue-tester/1650/
>>>
>>
>> Seems like another bug. The migration failed since for some reason the vm
>> is already defined on the destination host.
>>
>> 2018-04-10 11:08:08,685-0400 ERROR (jsonrpc/0) [api] FINISH create
>> error=Virtual machine already exists (api:129)
>> Traceback (most recent call last):
>> File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 122, in
>> method
>> ret = func(*args, **kwargs)
>> File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 191, in create
>> raise exception.VMExists()
>> VMExists: Virtual machine already exists
>>
>>
> Milan, Francesco, could it be that because of [1] that appears on the
> destination host right after shutting down the VM, it remained defined on
> that host?

I can't see any destroy call in the logs after the successful preceding
migration from the given host.  That would explain the “VMExists” error.

> [1] 2018-04-10 11:01:40,005-0400 ERROR (libvirt/events) [vds] Error running
> VM callback (clientIF:683)
>
> Traceback (most recent call last):
>
>   File "/usr/lib/python2.7/site-packages/vdsm/clientIF.py", line 646, in
> dispatchLibvirtEvents
>
> v.onLibvirtLifecycleEvent(event, detail, None)
>
> AttributeError: 'NoneType' object has no attribute 'onLibvirtLifecycleEvent'

That means that a life cycle event on an unknown VM has arrived, in this
case apparently destroy event, following the destroy call after the
failed incoming migration.  The reported AttributeError is a minor bug,
already fixed in master.  So it's most likely unrelated to the discussed
problem.
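A minimal model of the scenario above (class and method names are invented for illustration, not the real vdsm clientIF API): if no destroy call follows the migration away from the host, the VM stays in the host's VM container, and the next incoming create hits VMExists.

```python
# Illustrative sketch only: a host-side VM registry where create()
# refuses a VM id that is already defined, mirroring the reported
# "Virtual machine already exists" failure.

class VMExists(Exception):
    pass

class HostVMContainer:
    def __init__(self):
        self.vms = {}

    def create(self, vm_id):
        # An incoming migration defines the VM on the destination host.
        if vm_id in self.vms:
            raise VMExists(vm_id)
        self.vms[vm_id] = object()

    def destroy(self, vm_id):
        # Without this call after a migration away, the VM stays defined.
        self.vms.pop(vm_id, None)

host = HostVMContainer()
host.create('vm-1')        # VM migrated to this host earlier
# ... the VM later migrates away, but no destroy() arrives ...
try:
    host.create('vm-1')    # next incoming migration to the same host
except VMExists:
    print('Virtual machine already exists')
```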

>>> On Tue, Apr 10, 2018 at 4:14 PM, Alona Kaplan 
>>> wrote:
>>>
 Hi all,

 Looking at the log it seems that the new GetCapabilitiesAsync is
 responsible for the mess.

 -
 * 08:29:47 - engine loses connectivity to host 
 'lago-basic-suite-4-2-host-0'.*



 *- Every 3 seconds a getCapabalititiesAsync request is sent to the host 
 (unsuccessfully).*

  * before each "getCapabilitiesAsync" the monitoring lock is taken 
 (VdsManager,refreshImpl)

  * "getCapabilitiesAsync" immediately fails and throws 
 'VDSNetworkException: java.net.ConnectException: Connection refused'. The 
 exception is caught by 
 'GetCapabilitiesAsyncVDSCommand.executeVdsBrokerCommand' which calls 
 'onFailure' of the callback and re-throws the exception.

  catch (Throwable t) {
 getParameters().getCallback().onFailure(t);
 throw t;
  }

 * The 'onFailure' of the callback releases the "monitoringLock" 
 ('postProcessRefresh()->afterRefreshTreatment()-> if (!succeeded) 
 lockManager.releaseLock(monitoringLock);')

 * 'VdsManager,refreshImpl' catches the network exception, marks 
 'releaseLock = true' and *tries to release the already released lock*.

   The following warning is printed to the log -

   WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
 (EE-ManagedThreadFactory-engineScheduled-Thread-53) [] Trying to release 
 exclusive lock which does not exist, lock key: 
 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'




 *- 08:30:51 a successful getCapabilitiesAsync is sent.*


 *- 08:32:55 - The failing test starts (Setup Networks for setting ipv6).   
  *

 * SetupNetworks takes the monitoring lock.

 *- 08:33:00 - ResponseTracker cleans the getCapabilitiesAsync requests 
 from 4 minutes ago from its queue and prints a VDSNetworkException: Vds 
 timeout occured.*

   * When the first request is removed from the queue 
 ('ResponseTracker.remove()'), the
 *'Callback.onFailure' is invoked (for the second time) -> monitoring lock 
 is released (the lock taken by the SetupNetworks!).*

   * *The other requests removed from the queue also try to release the 
 monitoring lock*, but there is nothing to release.

   * The following warning log is printed -
 WARN  [org.ovirt.engine.core.bll.lock.InMemoryLockManager] 
 (EE-ManagedThreadFactory-engineScheduled-Thread-14) [] Trying to release 
 exclusive lock which does not exist, lock key: 
 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'

 - *08:33:00 - SetupNetworks fails on timeout ~4 seconds after it started*. 
 Why? I'm not 100% sure, but I guess the root cause is the late processing 
 of the 'getCapabilitiesAsync' failures, which loses the monitoring lock, 
 together with the late and multiple handling of each failure.


 Ravi, 'getCapabilitiesAsync' failure is treated twice and the lock is 
 
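The double-release race described above can be sketched minimally. The real engine code is Java (InMemoryLockManager etc.); this Python model uses invented names and only illustrates the failure mode: an unconditional release in the failure callback fires twice, and a late timeout-driven release frees a lock that is by then held by a different flow (SetupNetworks).

```python
# Invented-name sketch of the lock lifecycle bug described above.
# A release that does not check ownership can free another flow's lock.

import threading

class LockManager:
    def __init__(self):
        self._locks = set()
        self._mutex = threading.Lock()

    def acquire(self, key):
        with self._mutex:
            if key in self._locks:
                return False
            self._locks.add(key)
            return True

    def release(self, key):
        with self._mutex:
            if key not in self._locks:
                print("WARN Trying to release exclusive lock which does "
                      "not exist, lock key: %r" % key)
                return False
            self._locks.remove(key)
            return True

lm = LockManager()
KEY = 'ecf53d69-eb68-4b11-8df2-c4aa4e19bd93VDS_INIT'

# 1. Monitoring takes the lock; getCapabilitiesAsync fails fast:
lm.acquire(KEY)
lm.release(KEY)   # released once by the callback's onFailure
lm.release(KEY)   # released again by refreshImpl -> prints the WARN

# 2. Later, SetupNetworks takes the same lock ...
lm.acquire(KEY)
# ... and the ResponseTracker timeout invokes onFailure for the *stale*
# request, silently freeing a lock it no longer owns - the bug:
lm.release(KEY)
```

A fix along these lines would tie each release to the request (or owner) that acquired the lock, so a stale onFailure cannot free a lock acquired by a different flow.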

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Yaniv Kaul
On Wed, Apr 11, 2018 at 3:27 PM, Nir Soffer  wrote:

> On Wed, Apr 11, 2018 at 12:38 PM Eyal Edri  wrote:
>
>> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:
>>
>>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>>>
 Please make sure to run as much OST suites on this patch as possible
 before merging ( using 'ci please build' )

>>>
>>> But note that OST is not a way to verify the patch.
>>>
>>> Such changes require testing with all storage types we support.
>>>
>>
>> Well, we already have HE suite that runs on ISCSI, so at least we have
>> NFS+ISCSI on nested,
>> for real storage testing, you'll have to do it manually
>>
>
> We need glusterfs (both native and fuse based), and cinder/ceph storage.
>

We have Gluster in o-s-t as well, as part of the HC suite. It doesn't use
Fuse though.


>
> But we cannot practically test all flows with all types of storage for
> every patch.
>

Indeed. But we could easily add some, and we should at least execute the
minimal set that we are able to run easily via o-s-t.
Y.

>
> Nir
>
>
>>
>>
>>>
>>> Nir
>>>
>>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
 wrote:

> Hey,
>
> I've created a patch[0] that is finally able to activate libvirt's
> dynamic_ownership for VDSM while not negatively affecting
> functionality of our storage code.
>
> That of course comes with quite a bit of code removal, mostly in the
> area of host devices, hwrng and anything that touches devices; bunch
> of test changes and one XML generation caveat (storage is handled by
> VDSM, therefore disk relabelling needs to be disabled on the VDSM
> level).
>
> Because of the scope of the patch, I welcome storage/virt/network
> people to review the code and consider the implication this change has
> on current/future features.
>
> [0] https://gerrit.ovirt.org/#/c/89830/
>
> mpolednik
>



>>>
>>>
>>
>>
>
>

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Dan Kenigsberg
On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:

> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>
>> Please make sure to run as much OST suites on this patch as possible
>> before merging ( using 'ci please build' )
>>
>
> But note that OST is not a way to verify the patch.
>
> Such changes require testing with all storage types we support.
>
> Nir
>
> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
>> wrote:
>>
>>> Hey,
>>>
>>> I've created a patch[0] that is finally able to activate libvirt's
>>> dynamic_ownership for VDSM while not negatively affecting
>>> functionality of our storage code.
>>>
>>> That of course comes with quite a bit of code removal, mostly in the
>>> area of host devices, hwrng and anything that touches devices; bunch
>>> of test changes and one XML generation caveat (storage is handled by
>>> VDSM, therefore disk relabelling needs to be disabled on the VDSM
>>> level).
>>>
>>> Because of the scope of the patch, I welcome storage/virt/network
>>> people to review the code and consider the implication this change has
>>> on current/future features.
>>>
>>> [0] https://gerrit.ovirt.org/#/c/89830/
>>>
>>
In particular:  dynamic_ownership was set to 0 prehistorically (as part of
https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
running as root, was not able to play properly with root-squash nfs mounts.

Have you attempted this use case?

I join to Nir's request to run this with storage QE.

Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Martin Polednik

On 11/04/18 16:28 +0300, Dan Kenigsberg wrote:

On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:


On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:


Please make sure to run as much OST suites on this patch as possible
before merging ( using 'ci please build' )



But note that OST is not a way to verify the patch.

Such changes require testing with all storage types we support.

Nir

On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 

wrote:


Hey,

I've created a patch[0] that is finally able to activate libvirt's
dynamic_ownership for VDSM while not negatively affecting
functionality of our storage code.

That of course comes with quite a bit of code removal, mostly in the
area of host devices, hwrng and anything that touches devices; bunch
of test changes and one XML generation caveat (storage is handled by
VDSM, therefore disk relabelling needs to be disabled on the VDSM
level).

Because of the scope of the patch, I welcome storage/virt/network
people to review the code and consider the implication this change has
on current/future features.

[0] https://gerrit.ovirt.org/#/c/89830/




In particular:  dynamic_ownership was set to 0 prehistorically (as part of
https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
running as root, was not able to play properly with root-squash nfs mounts.

Have you attempted this use case?


I have not. Added this to my to-do list.

The important part to note about this patch (compared to my previous
attempts in the past) is that it explicitly disables dynamic_ownership
for FILE/BLOCK-backed disks. That means, unless `seclabel` is broken
on the libvirt side, the behavior would be unchanged for storage.
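For reference, the libvirt mechanism this presumably relies on is the per-device `<seclabel>` override: with `dynamic_ownership` globally enabled, a disk source carrying `<seclabel model='dac' relabel='no'/>` is left alone by libvirt, so VDSM keeps managing the storage ownership itself. A minimal sketch (paths and element layout are illustrative, not taken from the actual patch):

```python
# Sketch: build a disk element whose source opts out of libvirt's
# dynamic relabelling via a per-device <seclabel relabel='no'/>.

import xml.etree.ElementTree as ET

def disk_xml(path):
    disk = ET.Element('disk', type='file', device='disk')
    source = ET.SubElement(disk, 'source', file=path)
    # Leave ownership/labels of this source untouched even though
    # dynamic_ownership is enabled globally.
    ET.SubElement(source, 'seclabel', model='dac', relabel='no')
    ET.SubElement(disk, 'target', dev='vda', bus='virtio')
    return ET.tostring(disk, encoding='unicode')

print(disk_xml('/path/to/disk/image'))
```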


I join to Nir's request to run this with storage QE.



Re: [ovirt-devel] dynamic ownership changes

2018-04-11 Thread Elad Ben Aharon
We can test this on iSCSI, NFS and GlusterFS. As for Ceph and Cinder, we will
have to check, since we don't usually execute our automation on them.

On Wed, Apr 11, 2018 at 4:38 PM, Raz Tamir  wrote:

> +Elad
>
> On Wed, Apr 11, 2018 at 4:28 PM, Dan Kenigsberg  wrote:
>
>> On Wed, Apr 11, 2018 at 12:34 PM, Nir Soffer  wrote:
>>
>>> On Wed, Apr 11, 2018 at 12:31 PM Eyal Edri  wrote:
>>>
 Please make sure to run as much OST suites on this patch as possible
 before merging ( using 'ci please build' )

>>>
>>> But note that OST is not a way to verify the patch.
>>>
>>> Such changes require testing with all storage types we support.
>>>
>>> Nir
>>>
>>> On Tue, Apr 10, 2018 at 4:09 PM, Martin Polednik 
 wrote:

> Hey,
>
> I've created a patch[0] that is finally able to activate libvirt's
> dynamic_ownership for VDSM while not negatively affecting
> functionality of our storage code.
>
> That of course comes with quite a bit of code removal, mostly in the
> area of host devices, hwrng and anything that touches devices; bunch
> of test changes and one XML generation caveat (storage is handled by
> VDSM, therefore disk relabelling needs to be disabled on the VDSM
> level).
>
> Because of the scope of the patch, I welcome storage/virt/network
> people to review the code and consider the implication this change has
> on current/future features.
>
> [0] https://gerrit.ovirt.org/#/c/89830/
>

>> In particular:  dynamic_ownership was set to 0 prehistorically (as part
>> of https://bugzilla.redhat.com/show_bug.cgi?id=554961 ) because libvirt,
>> running as root, was not able to play properly with root-squash nfs mounts.
>>
>> Have you attempted this use case?
>>
>> I join to Nir's request to run this with storage QE.
>>
>
>
>
> --
>
>
> Raz Tamir
> Manager, RHV QE
>