The ovirt-engine change queue and ovirt-system-tests check_patch also look good
with the ovirt-engine rpm including Ahmed's revert.
Thanks
On Mon, Mar 25, 2019 at 9:38 PM Dominik Holler wrote:
This issue seems to be solved now, please let me know if not.
The build
https://jenkins.ovirt.org/view/oVirt system tests/job/ovirt-system-tests_manual/4408/
with ovirt-engine d0a215d862eb819f0bbdd51fed012f9b972c1bdf
which includes Ahmed's commit a236c90d54652503d43d8315582effb74050d22e
The patch was merged on 4.3, but test run [1] has build [2], which is one
patch before Ahmed's merge...
[1] http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-4.3/21/
[2] ovirt-engine-4.3.2.2-0.0.master.20190324105929.git8b0969c.el7.noarch.rpm
On Mon, Mar 25, 2019 at 8:42 AM Dan Kenigsberg wrote:
But http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-4.3/21/
is still failing.
Was your patch merged?
On Sun, Mar 24, 2019 at 10:14 AM Ahmad Khiet wrote:
Patched 4.3!
On Sun, Mar 24, 2019 at 9:06 AM Eitan Raviv wrote:
After some offline discussions, it seems that the change that should be
implemented to solve the original problem (host removal fails due to a
storage disconnect in progress) is to leave the host in 'preparing for
maintenance' until all relevant storage operations are completed.
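The proposal above can be sketched roughly as follows. This is an illustrative sketch only: the class, enum, and method names (HostMaintenanceGuard, storageOperationFinished, canRemoveHost) are invented for the example and are not oVirt engine code. The idea is simply to gate the transition out of 'preparing for maintenance' on a counter of pending storage operations, so host removal cannot proceed while a disconnect is in flight.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch (names are illustrative, not oVirt's actual classes):
// keep the host in PREPARING_FOR_MAINTENANCE until every pending storage
// operation has drained, and only then allow the transition to MAINTENANCE,
// which is the state in which removal is permitted.
public class HostMaintenanceGuard {
    enum HostStatus { UP, PREPARING_FOR_MAINTENANCE, MAINTENANCE }

    private HostStatus status = HostStatus.PREPARING_FOR_MAINTENANCE;
    private final AtomicInteger pendingStorageOps = new AtomicInteger(2);

    /** Called when a storage operation (e.g. a disconnect) completes. */
    void storageOperationFinished() {
        if (pendingStorageOps.decrementAndGet() == 0
                && status == HostStatus.PREPARING_FOR_MAINTENANCE) {
            status = HostStatus.MAINTENANCE; // safe to remove the host now
        }
    }

    boolean canRemoveHost() {
        return status == HostStatus.MAINTENANCE;
    }

    public static void main(String[] args) {
        HostMaintenanceGuard guard = new HostMaintenanceGuard();
        guard.storageOperationFinished();
        System.out.println("removable after 1/2 ops: " + guard.canRemoveHost()); // false
        guard.storageOperationFinished();
        System.out.println("removable after 2/2 ops: " + guard.canRemoveHost()); // true
    }
}
```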
On Sat, Mar
Unfortunately, the network suite is still failing on:
Cannot edit Host. Related operation is currently in progress. Please try
again later.
Can you check if that's the same issue? Did you revert from 4.3 too?
http://jenkins.ovirt.org/job/ovirt-system-tests_network-suite-4.3/19/
On Sat, 23 Mar
The patch was reverted on Thursday.
On Sat, Mar 23, 2019 at 8:48 PM Dan Kenigsberg wrote:
I was told that intervening in the host state machine is delicate, but
I think that this is the only correct approach.
Benny, Ahmad, Tal: do you have a plan to resolve this? We are entering
a third week with this constant failure.
On Wed, Mar 20, 2019 at 2:42 PM Eitan Raviv wrote:
I am not sure that locking both groups would be sufficient, because there
is still a chance that the removeNetworks request will start and acquire
the lock before the DisconnectStorage operation starts.
So probably the correct and foolproof solution is to not move the host to
maintenance until
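The race described above can be made concrete with a small sketch. The lock and thread names here are invented for illustration and are not oVirt's EngineLock API: the point is that a mutual-exclusion lock serializes the two operations but does not order them, so if removeNetworks wins the race and takes the lock first, DisconnectStorage still runs too late.

```java
import java.util.concurrent.locks.ReentrantLock;

// Illustrative sketch only (names are invented, not oVirt code): locks give
// mutual exclusion, not ordering. Here "removeNetworks" acquires the shared
// lock first, so the later "DisconnectStorage" attempt finds it taken.
public class LockOrderingSketch {
    static final ReentrantLock hostLock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        hostLock.lock(); // removeNetworks happens to acquire the lock first

        // DisconnectStorage arrives on another thread and tries the lock:
        Thread disconnectStorage = new Thread(() -> {
            boolean acquired = hostLock.tryLock();
            System.out.println("DisconnectStorage got lock first: " + acquired); // false
            if (acquired) {
                hostLock.unlock();
            }
        });
        disconnectStorage.start();
        disconnectStorage.join();

        hostLock.unlock();
    }
}
```

Locking both groups therefore only guarantees that the operations do not interleave, not that storage disconnect finishes before network removal begins, which is why a state-machine guard on the host was suggested instead.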
On Sun, Mar 17, 2019 at 3:04 PM Eyal Edri wrote:
> Not sure if all the same issue, but they seem to be failing around the same time:
>
>   ovirt-system-tests_hc-basic-suite-4.2: 1 day 12 hr - #824, 12 hr - #825, 57 min, integ-tests
>   ovirt-system-tests_hc-basic-suite-master: 2 days 12 hr - #1043
We should probably lock both groups in
VdsEventListener#processStorageOnVdsInactive or in RemoveVdsCommand.
Ahmad, please evaluate and adjust.
On Wed, Mar 20, 2019 at 11:25 AM Eitan Raviv wrote:
At least for the network suite, these failures are due to the fact that
DisconnectStoragePoolVDSCommand is not finished when a change_cluster
request is issued by the test; the request is revoked by the engine,
which fails the test setup.
Ahmed,
Can you please have a look at the engine log below [2]? Currently