[ovirt-devel] Re: Is OST service down?

2024-04-03 Thread Marcin Sobczyk

Hi,

On 3/27/24 08:43, Sandro Bonazzola wrote:

Hi, tried to trigger OST test for 
https://github.com/oVirt/ovirt-engine/pull/887 but got this error: 
https://github.com/oVirt/ovirt-engine/actions/runs/8448193473
Any clue on what got broken in the meanwhile?


there was a problem in the datacenter. We've rebooted the executors and 
things should be back to normal now. Thanks for noticing that!


Regards, Marcin



--
Sandro Bonazzola

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HZ2I2UDSPM3WN6LVGM3HKBTQZZXAIMAO/


[ovirt-devel] Re: Change in DNF COPR plugin requires action

2022-08-23 Thread Marcin Sobczyk



On 8/23/22 14:48, Strahil Nikolov wrote:
Is there any option that allows configuring the plugin back to the old 
default?


I don't think so. Looking at the implementation [2][3], it seems passing a 
cmdline argument is the way to go. You could probably define the COPR repo 
yourself, but I think the former is the easier option.


[2] 
https://github.com/rpm-software-management/dnf-plugins-core/blob/25d2cffeadc63e4116e1a6751b7bc6494784ce71/plugins/copr.py#L244
[3] 
https://github.com/rpm-software-management/dnf-plugins-core/blob/25d2cffeadc63e4116e1a6751b7bc6494784ce71/plugins/copr.py#L252




Best Regards,
Strahil Nikolov

On Tue, Aug 23, 2022 at 13:05, Marcin Sobczyk wrote:


On 8/23/22 11:39, Marcin Sobczyk wrote:
 > Hi All,
 >
 > recently a PR was merged to the DNF COPR plugin [1] that makes the
 > default chroot on CentOS Stream 9 'epel-9'. Because of this we now
 > need to specify the 'centos-stream-9' chroot name that we use in
 > oVirt COPR explicitly, i.e.:
 >
 >   dnf copr enable -y ovirt/ovirt-master-snapshot centos-stream-9
 >
 > I can see there's a bunch of places throughout our projects that
 > need an update. I'll post PRs for the ones I can spot.

Posted:

https://github.com/oVirt/ovirt-imageio/pull/133
https://github.com/oVirt/ovirt-provider-ovn/pull/24
https://github.com/oVirt/ovirt-web-ui/pull/1627
https://github.com/oVirt/buildcontainer/pull/16


 >
 > Regards, Marcin
 >
 > [1]
 >
https://github.com/rpm-software-management/dnf-plugins-core/pull/459/files


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XAA23HGCC6B3KEECTXEOPVAAYKAQYA4G/


[ovirt-devel] Please use RPM versions that are fine-enough

2022-06-09 Thread Marcin Sobczyk

Hi All,

I was analyzing a problematic situation with an ovirt-web-ui build breaking 
OST for a day or two. It turned out the problem was RPM versions not being 
fine-grained enough. The project used only the date part in its RPM release 
strings, so the builds were named like this:


 1.8.2-0.20220607.git703b8f7.fc35
 1.8.2-0.20220607.git26ac92c.fc35
 1.8.2-0.20220607.gitf519b50.fc35

The problem with this approach is that among these versions the one with 
the "biggest SHA" is considered the latest, which is not always true. The 
solution in this case is to use finer-grained RPM versions that include at 
least the hour and the minute of the build (although I can see most 
projects use seconds too). I posted [1] to fix this in ovirt-web-ui, but 
other projects suffer from this problem too (e.g. java-ovirt-engine-sdk4, 
ovirt-engine-ui-extensions) [2]. Please make sure that you use RPM 
versions that are fine-grained enough.


Thanks, Marcin

[1] https://github.com/oVirt/ovirt-web-ui/pull/1599
[2] 
https://copr.fedorainfracloud.org/coprs/ovirt/ovirt-master-snapshot/builds/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/BDOGSFDL4C3SHJF6S3V4J3POBCS4/


[ovirt-devel] OST change requires action

2022-04-20 Thread Marcin Sobczyk

Hi All,

for those of you running OST locally - with [1] merged you will need to 
adapt your executor slightly. The new requirement is to have the 
"storage" image installed on top of the other images. It's available 
through the same repos we use for publishing the regular ones, so it 
should suffice to run:


 sudo dnf install ost-images-storage-base

to have your executor working again.

The './setup_for_ost.sh' script has been updated to handle the change too.

Regards, Marcin

[1] https://github.com/oVirt/ovirt-system-tests/pull/76
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ASHJE3F3NS3SWETCY67WD33FPECLSSLV/


[ovirt-devel] Re: Vdsm: /ost not working due to "no artifacts with RPM files"

2022-03-21 Thread Marcin Sobczyk



On 3/21/22 13:30, Marcin Sobczyk wrote:



On 3/21/22 11:53, Milan Zamazal wrote:

Milan Zamazal  writes:


Hi,

/ost failed in
https://github.com/oVirt/vdsm/pull/99/checks?check_run_id=5605825076,
apparently because it thinks there are no artifacts.  But the artifacts
are there: https://github.com/oVirt/vdsm/actions/runs/2005557814.


The same happens when running OST locally:

   ./ost.sh run basic-suite-master el8stream --custom-repo=https://github.com/oVirt/vdsm/pull/99

   ...
   RuntimeError: GH pr/commit/run https://github.com/oVirt/vdsm/pull/99 had no artifacts with RPM files.



Does anybody know what's the problem?


I've looked deeper into this - the code resolved the passed PR to this 
check:


https://github.com/oVirt/vdsm/actions/runs/2005557816

Most probably, until now, we were luckily picking the right job, but it 
seems we'll have to iterate over them and consider multiple candidates.
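A sketch of what considering multiple candidates could look like. The run/artifact shapes and the artifact-name check below are assumptions for illustration only, not OST's actual resolution code:

```python
def runs_with_rpm_artifacts(runs):
    """From a list of candidate workflow runs (each assumed to carry an
    'artifacts' list of {'name': ...} dicts), keep only the runs that have
    an artifact that looks like it contains RPMs, instead of trusting
    whichever run happens to be resolved first."""
    return [
        run for run in runs
        if any("rpm" in artifact["name"].lower()
               for artifact in run.get("artifacts", []))
    ]

# Hypothetical candidates: one run with only test results, one with RPMs.
runs = [
    {"id": 2005557816, "artifacts": [{"name": "test-results"}]},
    {"id": 2005557814, "artifacts": [{"name": "rpm-el8"}]},
]
candidates = runs_with_rpm_artifacts(runs)
```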


I'll create a task for this in OST.

Regards, Marcin



Hi,

hard to tell. I've tried downloading these manually and things do look 
ok. The code that resolves PRs to artifacts is here [1].


Harel, would you have a moment to look into this? We could use some 
debugging logs in this area.


Regards, Marcin

[1] 
https://github.com/oVirt/ovirt-system-tests/blob/6c6e05a74eb0753154865f3221c1dc44eb7d90eb/ost_utils/deployment_utils/package_mgmt.py#L68 





Thanks,
Milan



___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/NITZQOFJEHIVI3UUYH2EYINSQOFTENVQ/


[ovirt-devel] Re: Vdsm: /ost not working due to "no artifacts with RPM files"

2022-03-21 Thread Marcin Sobczyk



On 3/21/22 11:53, Milan Zamazal wrote:

Milan Zamazal  writes:


Hi,

/ost failed in
https://github.com/oVirt/vdsm/pull/99/checks?check_run_id=5605825076,
apparently because it thinks there are no artifacts.  But the artifacts
are there: https://github.com/oVirt/vdsm/actions/runs/2005557814.


The same happens when running OST locally:

   ./ost.sh run basic-suite-master el8stream --custom-repo=https://github.com/oVirt/vdsm/pull/99
   ...
   RuntimeError: GH pr/commit/run https://github.com/oVirt/vdsm/pull/99 had no artifacts with RPM files.


Does anybody know what's the problem?


Hi,

hard to tell. I've tried downloading these manually and things do look 
ok. The code that resolves PRs to artifacts is here [1].


Harel, would you have a moment to look into this? We could use some 
debugging logs in this area.


Regards, Marcin

[1] 
https://github.com/oVirt/ovirt-system-tests/blob/6c6e05a74eb0753154865f3221c1dc44eb7d90eb/ost_utils/deployment_utils/package_mgmt.py#L68




Thanks,
Milan


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ALTLNL25C6Q57R3UEV3VGHEB2JJCKGM5/


[ovirt-devel] Re: [ovirt-users] Re: gerrit.ovirt.org upgrade

2022-02-01 Thread Marcin Sobczyk



On 2/1/22 09:58, Denis Volkov wrote:

Hello Marcin

Sorry for the late reply

I will try checking if that feature can be disabled. Meanwhile I think 
you could use the `comments` url to get the comments directly:
`curl -s -L https://gerrit.ovirt.org/changes/118398/comments|sed 
<https://gerrit.ovirt.org/changes/118398/comments|sed> 1d|jq '.'`
If needed, field `change_message_id` can be used to link message in 
`details` and `comments` URLs


Hi, yeah, I was able to use the /comments endpoint for my purposes,
so we can stick with comment chips (although I still find them really 
weird :) ).


Thanks, Marcin




On Mon, Jan 31, 2022 at 12:32 PM Marcin Sobczyk wrote:


Hi,

thanks for handling the upgrade.

With the new version it's no longer possible to get the contents of the
comments, which we need for OST gating.

i.e. for this patch:

https://gerrit.ovirt.org/c/ovirt-system-tests/+/118398

if you try:

curl -L https://gerrit.ovirt.org/changes/118398/detail | \
    sed 1d | \
    jq -r '.messages[]'

you'll see for one of my last comments:

...
"message": "Patch Set 1:\n\n(1 comment)"
...

but there's no "ci ost" string, which is what I actually wrote.

It's probably because of the new "comment chips" feature.
If it's possible and there are no objections could you please try
turning it off?

Regards, Marcin


On 1/28/22 18:48, Denis Volkov wrote:
 > Hello
 >
 > Upgrade to version 3.4.3 is finished. Gerrit is up and running.
 >
 > In case of issues please create ticket in issue tracking system:
 > https://issues.redhat.com/projects/CPDEVOPS/issues
 >
 > --
 >
 > Denis Volkov
 >
 >
 >



--

Denis Volkov

Red Hat


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4QQKN27UYE3B3H2NDWIRTZYAZBTIGG4X/


[ovirt-devel] Re: [ovirt-users] Re: gerrit.ovirt.org upgrade

2022-01-31 Thread Marcin Sobczyk

Hi,

thanks for handling the upgrade.

With the new version it's no longer possible to get the contents of the 
comments, which we need for OST gating.


i.e. for this patch:

https://gerrit.ovirt.org/c/ovirt-system-tests/+/118398

if you try:

curl -L https://gerrit.ovirt.org/changes/118398/detail | \
sed 1d | \
jq -r '.messages[]'

you'll see for one of my last comments:

...
"message": "Patch Set 1:\n\n(1 comment)"
...

but there's no "ci ost" string, which is what I actually wrote.

It's probably because of the new "comment chips" feature.
If it's possible and there are no objections could you please try 
turning it off?
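For reference, the first line that `sed 1d` drops is Gerrit's XSSI-protection prefix (`)]}'`), which precedes the JSON body of every REST response. A minimal sketch of doing the same in Python, with a made-up payload shaped like the `detail` output:

```python
import json

# Hypothetical response body; real Gerrit prepends ")]}'" before the JSON.
raw = ")]}'\n" + json.dumps(
    {"messages": [{"id": "a1", "message": "Patch Set 1:\n\n(1 comment)"}]}
)

def parse_gerrit_response(raw):
    # Drop the first line (the XSSI prefix), then parse the JSON body.
    _, body = raw.split("\n", 1)
    return json.loads(body)

detail = parse_gerrit_response(raw)
messages = [m["message"] for m in detail["messages"]]
```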


Regards, Marcin


On 1/28/22 18:48, Denis Volkov wrote:

Hello

Upgrade to version 3.4.3 is finished. Gerrit is up and running.

In case of issues please create ticket in issue tracking system: 
https://issues.redhat.com/projects/CPDEVOPS/issues 



--

Denis Volkov




___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/U4RFPFAGCTHEN4Y7HD4LF5ADPEWHJYU4/


[ovirt-devel] OST fails with "Error loading module from /usr/share/ovirt-engine/modules/common/org/springframework/main/module.xml"

2021-12-24 Thread Marcin Sobczyk

Hi All,

OST currently fails all the time during engine setup.
Here's a piece of ansible log that's seen repeatedly and I think 
describes the problem:


11:07:54 E "engine-config",
11:07:54 E "-s",
11:07:54 E "OvfUpdateIntervalInMinutes=10"
11:07:54 E ],
11:07:54 E "delta": "0:00:01.142926",
11:07:54 E "end": "2021-12-24 11:06:37.894810",
11:07:54 E "invocation": {
11:07:54 E "module_args": {
11:07:54 E "_raw_params": "engine-config -s OvfUpdateIntervalInMinutes='10' ",
11:07:54 E "_uses_shell": false,
11:07:54 E "argv": null,
11:07:54 E "chdir": null,
11:07:54 E "creates": null,
11:07:54 E "executable": null,
11:07:54 E "removes": null,
11:07:54 E "stdin": null,
11:07:54 E "stdin_add_newline": true,
11:07:54 E "strip_empty_ends": true,
11:07:54 E "warn": false
11:07:54 E }
11:07:54 E },
11:07:54 E "item": {
11:07:54 E "key": "OvfUpdateIntervalInMinutes",
11:07:54 E "value": "10"
11:07:54 E },
11:07:54 E "msg": "non-zero return code",
11:07:54 E "rc": 1,
11:07:54 E "start": "2021-12-24 11:06:36.751884",
11:07:54 E "stderr": "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false",
11:07:54 E "stderr_lines": [
11:07:54 E "Picked up JAVA_TOOL_OPTIONS: -Dcom.redhat.fips=false"
11:07:54 E ],
11:07:54 E "stdout": "Error loading module from /usr/share/ovirt-engine/modules/common/org/springframework/main/module.xml",
11:07:54 E "stdout_lines": [
11:07:54 E "Error loading module from /usr/share/ovirt-engine/modules/common/org/springframework/main/module.xml"

We do set some config values for the engine in OST when running 
engine-setup. I tried commenting these out, but then the engine failed its 
health check anyway:


"Status code was 503 and not [200]: HTTP Error 503: Service Unavailable"

The last working set of OST images was the one from Dec 23, 2021 2:05:08 
AM. The first broken one is from Dec 24, 2021 2:05:09 AM. The shipped 
ovirt-engine RPMs don't seem to contain any important changes between 
these two sets, but AFAICS the newer ovirt-dependencies RPM did take in 
a couple of patches that look suspicious [1][2][3]. The patches were 
merged on November 16th, but it seems they were first used in the 
broken set from Dec 24 (the one from Dec 23 seems to contain an 
ovirt-dependencies RPM based on this commit [4]).


I wanted to try out an older version of ovirt-dependencies, but I think 
the old builds were wiped from resources.ovirt.org.


I will disable the cyclic el8stream OST runs for now, because all of them 
fail. If anyone is able to make a build with those patches reverted and 
wants to test it out, please ping me and I'll re-enable them.


Regards, Marcin

[1] https://gerrit.ovirt.org/c/ovirt-dependencies/+/114699
[2] https://gerrit.ovirt.org/c/ovirt-dependencies/+/113877
[3] https://gerrit.ovirt.org/c/ovirt-dependencies/+/114654
[4] https://gerrit.ovirt.org/c/ovirt-dependencies/+/117459
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MON3B5NSSXBL2BFSLYYIMOTKUR2CK732/


[ovirt-devel] GitHub Actions-based CI for vdsm

2021-11-26 Thread Marcin Sobczyk

Hi All,

I've been working on adding GitHub Actions-based CI to vdsm today.
Feel free to check out the patches here:

https://gerrit.ovirt.org/q/topic:%22github-actions-ci%22+(status:open%20OR%20status:merged)

Some comments:
- the linters work fine already, we can start using them

- RPM building works too in general. I think the RPM versions are not 
right yet, so I'll look into this. After the 'rpm' job is done we get a 
zipfile with all the built RPMs inside. In the future we may want to run 
'createrepo_c' on this dir as well, so we'll have a ready-to-be-used 
repository in that zip.


- 'tests' are working too, but we have a couple of failures we'd need to 
look at. This job, unlike the others, runs in GitHub's Ubuntu VM inside 
which we use a privileged container for running the tests.


- didn't try 'tests-storage' yet

- In order not to waste precious free CI minutes and storage, we run the 
linters first, then the tests, but only under the condition that the 
linters didn't fail, and finally we build the RPMs, but this time under 
the condition that the tests didn't fail.


You can find some of the runs I made in my personal fork here:

https://github.com/tinez/vdsm/actions/workflows/ci.yml

Comments, remarks and reviews are highly appreciated.

Regards, Marcin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KNZMFTBRMTQOKNKNBKGASQ5QJFUNFRMJ/


[ovirt-devel] Re: [OST] Documentation update needed

2021-11-24 Thread Marcin Sobczyk



On 11/24/21 13:54, Aviv Litman wrote:

Hi All,
I tried to follow the steps documented in the oVirt System Tests docs.
After consulting with Marcin, I understand that we haven't used
./run_suite.sh for a long time now.

I think the right command is for example:
./ost.sh run basic-suite-master el8stream --custom-repo=https://jenkins.ovirt.org/job/ovirt-dwh_standard-check-patch/1346/

We don't know who maintains these docs.


Anton, Galit, can we get rid of those outdated docs?
Either remove the server completely or redirect it to the current README.md.

Regards, Marcin



Thanks!
--

Aviv Litman

BI Associate Software Engineer

Red Hat

alit...@redhat.com 




___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DREQ6TQA6STLVN24WSG4DXHOREKYG645/


[ovirt-devel] Re: VDSM build failure: unsupported pickle protocol: 5

2021-10-29 Thread Marcin Sobczyk



On 10/28/21 18:52, Nir Soffer wrote:

On Thu, Oct 28, 2021 at 2:08 PM Marcin Sobczyk  wrote:


Hi,

On 10/28/21 11:59, Sandro Bonazzola wrote:

hi,
I'm trying to enable COPR builds for vdsm
(https://gerrit.ovirt.org/c/vdsm/+/117368)

And it's currently failing to rebuild src.rpm (generated on Fedora 34)
for el8 with the following error:
(https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-8-x86_64/02912480-vdsm/build.log.gz)

make[2]: Entering directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm'
Making all in api
make[3]: Entering directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm/api'
Generate vdsm-api.html
chmod u+w .
PYTHONPATH=./../../:./../../vdsm \
   ./schema_to_html.py vdsm-api ./vdsm-api.html
Traceback (most recent call last):
File "./schema_to_html.py", line 250, in 
  main()
File "./schema_to_html.py", line 245, in main
  api_schema = vdsmapi.Schema((schema_type,), strict_mode=False)
File "/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm/api/vdsmapi.py", line 145, in __init__
  loaded_schema = pickle.loads(f.read())
ValueError: unsupported pickle protocol: 5
make[3]: *** [Makefile:697: vdsm-api.html] Error 1
make[3]: Leaving directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm/api'
make[2]: *** [Makefile:644: all-recursive] Error 1
make[2]: Leaving directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm'
make[1]: *** [Makefile:466: all-recursive] Error 1
make[1]: Leaving directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib'
make: *** [Makefile:539: all-recursive] Error 1
error: Bad exit status from /var/tmp/rpm-tmp.nDfLzv (%build)

Sounds like the `make dist` process run on Fedora 34 brings in a source file,
used at build time on el8, in a cPickle format which is not backward compatible.

Seems to be a bug; the cPickle shouldn't be included in the tar.gz, it should
be generated at build time.

Comments?


The pickles were introduced here:

https://gerrit.ovirt.org/c/vdsm/+/94196

AFAIR they were added to the vdsm-api package because previously we were
generating them during RPM installation in the %post section, which caused
issues with oVirt Node.

I'm not sure how easy it will be to keep them out of the srpm/tar.gz but
still have them in the rpm. As a quick workaround I can change the pickle
protocol version that we use [1] to '4', which should work for both
Fedora 34 and el8.


Creating the pickle when building srpm is bad. This means you must use
some old (slow) pickle protocol that works on all possible platforms, instead
of the highest (fast) protocol available on the target platform.

We should really create the pickle when building the rpm, which is done
in mock, with the right python version.

If we cannot do this then building the cache on the host during configure time
will be an easy solution. Check if the cache exists in /var/lib/cache/vdsm/...
and regenerate it if needed.

If we don't want to run the schema_to_html tool at this time, we can add
the schema in json format - this can be done when building the srpm.
Converting json to pickle is very fast and does not have any dependencies.



Ack, let's have this quick workaround for now and implement a proper 
solution later.


Regards, Marcin


Nir


Regards, Marcin

[1]
https://github.com/oVirt/vdsm/blob/1969ab99c371ad498ea8693671cec60e2d0d49c2/lib/vdsm/api/schema_to_pickle.py#L46




--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbona...@redhat.com

Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.





___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/I7X73HSOVKYPQDQKMPAC7R5MLGQZH4N7/


[ovirt-devel] Re: VDSM build failure: unsupported pickle protocol: 5

2021-10-28 Thread Marcin Sobczyk

Hi,

On 10/28/21 11:59, Sandro Bonazzola wrote:

hi,
I'm trying to enable COPR builds for vdsm
(https://gerrit.ovirt.org/c/vdsm/+/117368)


And it's currently failing to rebuild src.rpm (generated on Fedora 34) 
for el8 with the following error:
(https://download.copr.fedorainfracloud.org/results/ovirt/ovirt-master-snapshot/centos-stream-8-x86_64/02912480-vdsm/build.log.gz)


make[2]: Entering directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm'
Making all in api
make[3]: Entering directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm/api'
   Generate vdsm-api.html
chmod u+w .
PYTHONPATH=./../../:./../../vdsm \
./schema_to_html.py vdsm-api ./vdsm-api.html
Traceback (most recent call last):
   File "./schema_to_html.py", line 250, in 
 main()
   File "./schema_to_html.py", line 245, in main
 api_schema = vdsmapi.Schema((schema_type,), strict_mode=False)
   File "/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm/api/vdsmapi.py", line 145, in __init__
 loaded_schema = pickle.loads(f.read())
ValueError: unsupported pickle protocol: 5
make[3]: *** [Makefile:697: vdsm-api.html] Error 1
make[3]: Leaving directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm/api'
make[2]: *** [Makefile:644: all-recursive] Error 1
make[2]: Leaving directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib/vdsm'
make[1]: *** [Makefile:466: all-recursive] Error 1
make[1]: Leaving directory '/builddir/build/BUILD/vdsm-4.50.0.1/lib'
make: *** [Makefile:539: all-recursive] Error 1
error: Bad exit status from /var/tmp/rpm-tmp.nDfLzv (%build)

Sounds like the `make dist` process run on Fedora 34 brings in a source file,
used at build time on el8, in a cPickle format which is not backward compatible.

Seems to be a bug; the cPickle shouldn't be included in the tar.gz, it should
be generated at build time.

Comments?


The pickles were introduced here:

https://gerrit.ovirt.org/c/vdsm/+/94196

AFAIR they were added to the vdsm-api package because previously we were 
generating them during RPM installation in the %post section, which caused 
issues with oVirt Node.


I'm not sure how easy it will be to keep them out of the srpm/tar.gz but 
still have them in the rpm. As a quick workaround I can change the pickle
protocol version that we use [1] to '4', which should work for both 
Fedora 34 and el8.
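A minimal sketch of the workaround: pinning the protocol keeps a pickle written on Fedora 34 loadable by el8's Python 3.6, which only understands protocols up to 4. The `schema` dict here is a stand-in for the real vdsm API schema:

```python
import pickle

schema = {"VM.getStats": {"params": []}}  # stand-in for the vdsm API schema

# Fedora 34's Python defaults to pickle protocol 5; el8's Python 3.6 can
# only read up to protocol 4, so the protocol is pinned explicitly.
data = pickle.dumps(schema, protocol=4)

assert data[1] == 4            # protocol number in the pickle header
assert pickle.loads(data) == schema
```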


Regards, Marcin

[1] 
https://github.com/oVirt/vdsm/blob/1969ab99c371ad498ea8693671cec60e2d0d49c2/lib/vdsm/api/schema_to_pickle.py#L46





--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA

sbona...@redhat.com

Red Hat respects your work life balance. Therefore there is no need to
answer this email out of your office hours.

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VERWVBLKQ4TNVMT74DUUZO4KLEEEPDVH/


[ovirt-devel] Re: VDSM test failures on CentOS Stream 9

2021-09-15 Thread Marcin Sobczyk



On 9/15/21 5:28 PM, Marcin Sobczyk wrote:


On 9/15/21 1:14 PM, Sandro Bonazzola wrote:


On Wed, Sep 15, 2021 at 12:20, Marcin Sobczyk wrote:



 On 9/15/21 10:28 AM, Sandro Bonazzola wrote:
 - cut -
 > Any chance someone can investigate them?
 >
 
https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/29667/pipeline/157
 >
 I wanted to try out CentOS Stream 9. Saw the ISO in the other mailing
 thread and the installation went fine.
 There are no repos, however, and I can't even install basic stuff like
 git. Any advice?


cat /etc/yum.repos.d/centos.repo
[baseos-pre-release]
name=CentOS Stream $releasever - BaseOS (pre-release)
baseurl=https://composes.stream.centos.org/production/latest-CentOS-Stream/compose/BaseOS/$basearch/os/
#gpgkey=file:///usr/share/distribution-gpg-keys/centos/RPM-GPG-KEY-CentOS-Official
gpgcheck=0
skip_if_unavailable=False
enabled=1

[appstream-pre-release]
name=CentOS Stream $releasever - AppStream (pre-release)
baseurl=https://composes.stream.centos.org/production/latest-CentOS-Stream/compose/AppStream/$basearch/os/
enabled=1
#gpgkey=file:///usr/share/distribution-gpg-keys/centos/RPM-GPG-KEY-CentOS-Official
gpgcheck=0
exclude=gluster*

[crb-pre-release]
name=CentOS Stream $releasever - CRB (pre-release)
baseurl=https://composes.stream.centos.org/production/latest-CentOS-Stream/compose/CRB/$basearch/os/
enabled=1
#gpgkey=file:///usr/share/distribution-gpg-keys/centos/RPM-GPG-KEY-CentOS-Official
gpgcheck=0



Thank you!

2 more fixes from me:

https://gerrit.ovirt.org/c/vdsm/+/116733
https://gerrit.ovirt.org/c/vdsm/+/116734

Actually, given my findings in the second patch, we need to double-check
that the '_ANY_CPU' variable [1] is defined before we pin vdsm to a single
CPU.

Otherwise all the other spawned commands will run on a single CPU as well.

[1] 
https://github.com/oVirt/vdsm/blob/3cad5b9237ce2030861132173d4fc6bb9782fc08/lib/vdsm/common/cmdutils.py#L46
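The concern can be illustrated with `os.sched_setaffinity` (a Linux-only sketch; the pinning mechanism vdsm actually uses may differ): CPU affinity is inherited by child processes, so it has to be widened again before spawning other commands.

```python
import os

all_cpus = os.sched_getaffinity(0)     # remember the original CPU mask

os.sched_setaffinity(0, {0})           # pin this process to CPU 0
assert os.sched_getaffinity(0) == {0}  # children spawned now inherit this

os.sched_setaffinity(0, all_cpus)      # widen back before spawning commands
assert os.sched_getaffinity(0) == all_cpus
```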




Regards, Marcin




 Regards, Marcin




--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com <mailto:sbona...@redhat.com>

<https://www.redhat.com/> 

*Red Hat respects your work life balance. Therefore there is no need
to answer this email out of your office hours.
<https://mojo.redhat.com/docs/DOC-1199578>*
*

*

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TEOSRXVOWAQHDGOO73VCJD4CCYUXC2V6/


[ovirt-devel] Re: VDSM test failures on CentOS Stream 9

2021-09-15 Thread Marcin Sobczyk



On 9/15/21 1:14 PM, Sandro Bonazzola wrote:



Il giorno mer 15 set 2021 alle ore 12:20 Marcin Sobczyk 
mailto:msobc...@redhat.com>> ha scritto:




On 9/15/21 10:28 AM, Sandro Bonazzola wrote:
- cut -
> Any chance someone can investigate them?
>

https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/29667/pipeline/157

<https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/29667/pipeline/157>

>

<https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/29667/pipeline/157

<https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/29667/pipeline/157>>
>
I wanted to try out CentOS Stream 9. Saw the ISO in the other mailing
thread and the installation went fine.
There are no repos, however, and I can't even install basic stuff like
git. Any advice?


cat /etc/yum.repos.d/centos.repo
[baseos-pre-release]
name=CentOS Stream $releasever - BaseOS (pre-release)
baseurl=https://composes.stream.centos.org/production/latest-CentOS-Stream/compose/BaseOS/$basearch/os/ 

#gpgkey=file:///usr/share/distribution-gpg-keys/centos/RPM-GPG-KEY-CentOS-Official
gpgcheck=0
skip_if_unavailable=False
enabled=1

[appstream-pre-release]
name=CentOS Stream $releasever - AppStream (pre-release)
baseurl=https://composes.stream.centos.org/production/latest-CentOS-Stream/compose/AppStream/$basearch/os/ 

enabled=1
#gpgkey=file:///usr/share/distribution-gpg-keys/centos/RPM-GPG-KEY-CentOS-Official
gpgcheck=0
exclude=gluster*

[crb-pre-release]
name=CentOS Stream $releasever - CRB (pre-release)
baseurl=https://composes.stream.centos.org/production/latest-CentOS-Stream/compose/CRB/$basearch/os/ 

enabled=1
#gpgkey=file:///usr/share/distribution-gpg-keys/centos/RPM-GPG-KEY-CentOS-Official
gpgcheck=0



Thank you!

2 more fixes from me:

https://gerrit.ovirt.org/c/vdsm/+/116733
https://gerrit.ovirt.org/c/vdsm/+/116734

Regards, Marcin





Regards, Marcin

> --
>
> Sandro Bonazzola
>
> MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV
>
> Red Hat EMEA <https://www.redhat.com/ <https://www.redhat.com/>>
>
> sbona...@redhat.com <mailto:sbona...@redhat.com>
<mailto:sbona...@redhat.com <mailto:sbona...@redhat.com>>
>
> <https://www.redhat.com/ <https://www.redhat.com/>>
>
> *Red Hat respects your work life balance. Therefore there is no
need
> to answer this email out of your office hours.
> *
> *
>
> *



--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com <mailto:sbona...@redhat.com>

<https://www.redhat.com/> 

*Red Hat respects your work life balance. Therefore there is no need 
to answer this email out of your office hours.

<https://mojo.redhat.com/docs/DOC-1199578>*
*

*

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EXW56UL5MOVUC6EEX4HS3GHUC5XU7TIF/


[ovirt-devel] Re: VDSM test failures on CentOS Stream 9

2021-09-15 Thread Marcin Sobczyk



On 9/15/21 10:28 AM, Sandro Bonazzola wrote:
- cut -

Any chance someone can investigate them?
https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/29667/pipeline/157 



I wanted to try out CentOS Stream 9. Saw the ISO in the other mailing 
thread and the installation went fine.
There are no repos, however, and I can't even install basic stuff like
git. Any advice?


Regards, Marcin


--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com 

 

*Red Hat respects your work life balance. Therefore there is no need 
to answer this email out of your office hours.

*
*

*

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MR36MCJRFTNLBGPBBIMQIFA6MJCTZJZD/


[ovirt-devel] Re: Vdsm 4.4.8 branch?

2021-08-16 Thread Marcin Sobczyk

+1 from me.

On 8/12/21 12:58 PM, Milan Zamazal wrote:

Hi,

we are after code freeze for 4.4.8 now.  Can we create a 4.4.8 branch in
Vdsm now and start 4.4.9 development on master or does anybody need to
postpone the branch?

Thanks,
Milan


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4HJQNK7MDRNIDUJYMOPP5D7VW2OQ4QO7/


[ovirt-devel] Re: OST: How to run ost.sh with a local custom repo?

2021-08-04 Thread Marcin Sobczyk



On 8/4/21 1:30 PM, Milan Zamazal wrote:

Hi,

when I try to run ost.sh on a local repo with
--custom-repo=file:///home/pdm/rpmbuild/repodata/repomd.xml, I get the
following error:

   requests.exceptions.InvalidSchema: No connection adapters were found for 
'file:///home/pdm/rpmbuild/repodata/repomd.xml

I tried using file:/home/... and /home/... but neither works.
https://... works fine.

How can I run the script with a custom repo in a local directory?

Hi,

this is most probably caused by:

https://github.com/oVirt/ovirt-system-tests/blob/0ad56d467ac0e608c568f597188db08117b7565d/ost_utils/ost_utils/pytest/fixtures/deployment.py#L85

you can comment out this line and most probably your problems will go away.
If it works, please let us know here, and I'll adapt the code to make it
work with local repos like these.
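For context on why the error occurs: requests only ships HTTP(S) connection adapters, so a file:// URL raises InvalidSchema, while the standard library's urllib handles file:// natively. A minimal scheme-aware fetch might look like this (a hypothetical helper, not OST's actual code):

```python
from urllib.parse import urlparse
from urllib.request import urlopen

def fetch(url):
    # requests has no connection adapter for file://; urllib does,
    # so dispatch on the URL scheme before deciding how to read it.
    scheme = urlparse(url).scheme
    if scheme in ("http", "https", "file"):
        with urlopen(url) as resp:
            return resp.read()
    # bare local path such as /home/user/rpmbuild/repodata/repomd.xml
    with open(url, "rb") as f:
        return f.read()
```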

Regards, Marcin



Thanks,
Milan
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2GPXTSXP5PSWOB6VQEVAHKJZUZSRLO6C/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/IMFJZKYD7V6RJ7YIRKLYALSKSCV3CLHC/


[ovirt-devel] Re: Add "ci system-test" command

2021-06-24 Thread Marcin Sobczyk



On 6/23/21 5:44 PM, Nir Soffer wrote:

Similar to "ci build", "ci test", and "ci merge", add a new command that
triggers an OST run.

Running OST is tied now in vdsm (and engine?) to Code-Review: +2.
This causes trouble and does not allow non-maintainers to use the convenient OST
infrastructure.

Expected flow:

1. User add a comment with "ci system-test"

"ci system-test" is sooo long, I vote for "ci ost".

Regards, Marcin


2. OST flow building and running OST triggered
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2FCJZLFJJ2SB3KVQ3YREZBVEYXPBQRUN/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PEYFGBP34OB4YCLQHTJUILWIIQEOA4MA/


[ovirt-devel] Re: basic OST: Error initializing source docker, unauthorized: incorrect username or password

2021-06-24 Thread Marcin Sobczyk



On 6/23/21 2:13 PM, Yedidyah Bar David wrote:

On Wed, Jun 23, 2021 at 10:29 AM Marcin Sobczyk  wrote:



On 6/23/21 7:31 AM, Yedidyah Bar David wrote:

Hi all,

On Mon, Jun 21, 2021 at 8:55 PM  wrote:

Project: 
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/
Build: 
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/1235/

This is a week-old build, but some (but not all) of the check-patch
OST runs I did yesterday failed the same way, e.g. [1][2]:

==
failed on setup with "ost_utils.shell.ShellError: Command failed with
rc=125. Stdout:

Stderr:
Trying to pull docker.io/selenium/hub:3.141.59-20210422...
unable to retrieve auth token: invalid username/password:
unauthorized: incorrect username or password
Error: 1 error occurred:
* Error initializing source docker://selenium/hub:3.141.59-20210422:
unable to retrieve auth token: invalid username/password:
unauthorized: incorrect username or password"
==

Is this a known issue? Perhaps related to docker's rate-limiting?

I also stumbled upon this some time ago, but it stopped reproducing somehow.
There's a bug filed for this - CPDEVOPS-176.

Link, please? This one says "Something's gone wrong":

https://ovirt-jira.atlassian.net/browse/CPDEVOPS-176

This now happened again:

https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/17456/

I've copied selenium images to our own ovirt repository in quay.io
and posted a patch for OST to make the switch:

https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/115401/

Let's see if this works for us.

Regards, Marcin



Best regards,


Regards, Marcin


[1] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/17434/

[2] 
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/17434/testReport/junit/basic-suite-master.test-scenarios/test_100_basic_ui_sanity/Invoking_jobs___check_patch_basic_suite_master_el8_x86_64___test_secure_connection_should_fail_without_root_ca_firefox_/

Best regards,



___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JTAQRT7CF6VE7D3CRICRCORAAZIG66DC/


[ovirt-devel] Re: basic OST: Error initializing source docker, unauthorized: incorrect username or password

2021-06-23 Thread Marcin Sobczyk



On 6/23/21 7:31 AM, Yedidyah Bar David wrote:

Hi all,

On Mon, Jun 21, 2021 at 8:55 PM  wrote:

Project: 
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/
Build: 
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/1235/

This is a week-old build, but some (but not all) of the check-patch
OST runs I did yesterday failed the same way, e.g. [1][2]:

==
failed on setup with "ost_utils.shell.ShellError: Command failed with
rc=125. Stdout:

Stderr:
Trying to pull docker.io/selenium/hub:3.141.59-20210422...
   unable to retrieve auth token: invalid username/password:
unauthorized: incorrect username or password
Error: 1 error occurred:
* Error initializing source docker://selenium/hub:3.141.59-20210422:
unable to retrieve auth token: invalid username/password:
unauthorized: incorrect username or password"
==

Is this a known issue? Perhaps related to docker's rate-limiting?

I also stumbled upon this some time ago, but it stopped reproducing somehow.
There's a bug filed for this - CPDEVOPS-176.

Regards, Marcin



[1] https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/17434/

[2] 
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/17434/testReport/junit/basic-suite-master.test-scenarios/test_100_basic_ui_sanity/Invoking_jobs___check_patch_basic_suite_master_el8_x86_64___test_secure_connection_should_fail_without_root_ca_firefox_/

Best regards,

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PO7YKBUOY5YZY6ONK7BSE2FCFBA5K6QS/


[ovirt-devel] Re: ansible OST: Failed to hot-plug disk

2021-06-23 Thread Marcin Sobczyk



On 6/23/21 9:12 AM, Yedidyah Bar David wrote:

On Wed, Jun 23, 2021 at 8:15 AM Yedidyah Bar David  wrote:

Hi all,

Please see [1], failed twice so far:

 "msg": "Fault reason is \"Operation Failed\". Fault detail is
\"[Failed to hot-plug disk]\". HTTP response code is 400."

vdsm.log has:

libvirt.libvirtError: unsupported configuration: IOThreads only
available for virtio pci and virtio ccw disk

Seems like [1], caused by reverting the patches to require libvirt <
7.4. Do we have a workaround? Or just wait for a fix?

We've merged a workaround for the basic suite [2], but it seems
we need another one for ansible suite. I'll look into this.

Regards, Marcin



[1] https://bugzilla.redhat.com/show_bug.cgi?id=1974096

[2] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/115336/




Thanks and best regards,

[1] https://jenkins.ovirt.org/job/ovirt-system-tests_ansible-suite-master/1972/
--
Didi




___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/NTFV35NZ4AIKHTMARNWR4XVHRLLB7H3C/


[ovirt-devel] Re: OST verifed -1 broken, fails for infra issue in OST

2021-06-21 Thread Marcin Sobczyk

Hi,

On 6/14/21 1:14 PM, Nir Soffer wrote:

I got this wrong review from OST, which looks like an infra issue in OST:

Patch:
https://gerrit.ovirt.org/c/vdsm/+/115232

Error:
https://gerrit.ovirt.org/c/vdsm/+/115232#message-46ad5e75_ed543485

Failing code:

        Package(*line.split())
        for res in results.values()
>       for line in _filter_results(res['stdout'].splitlines())
    ]
E   TypeError: __new__() missing 2 required positional arguments: 'version' and 'repo'

ost_utils/ost_utils/deployment_utils/package_mgmt.py:177: TypeError

I hope someone working on OST can take a look soon.

Sure, the fix is merged already:

https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/115249/
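The traceback above is what you get when unpacking dnf output lines into a three-field namedtuple and some lines don't have three fields. A hedged sketch of a defensive parse follows; the Package shape is inferred from the traceback, and the actual merged fix may differ:

```python
from collections import namedtuple

Package = namedtuple("Package", ["name", "version", "repo"])

def parse_packages(stdout):
    # Informational or wrapped lines split into a number of fields other
    # than three; Package(*fields) on such a line raises
    # "TypeError: __new__() missing 2 required positional arguments".
    # Skipping malformed lines keeps the parse robust.
    packages = []
    for line in stdout.splitlines():
        fields = line.split()
        if len(fields) == 3:
            packages.append(Package(*fields))
    return packages
```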

Regards, Marcin



Nir


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YCV6RJQMBEY6MQGOYR7IQ2JCJ4JD34NH/


[ovirt-devel] Re: hc-basic-suite-master fails due to missing glusterfs firewalld services

2021-06-21 Thread Marcin Sobczyk



On 6/17/21 6:59 PM, Yedidyah Bar David wrote:

On Thu, Jun 17, 2021 at 6:27 PM Marcin Sobczyk  wrote:



On 6/17/21 1:44 PM, Yedidyah Bar David wrote:

On Wed, Jun 16, 2021 at 1:23 PM Yedidyah Bar David  wrote:

Hi,

I now tried running locally hc-basic-suite-master with a patched OST,
and it failed due to $subject. I checked and see that this also
happened on CI, e.g. [1], before it started failing due to an unrelated
reason later:

E   TASK [gluster.infra/roles/firewall_config : Add/Delete
services to firewalld rules] ***
E   failed: [lago-hc-basic-suite-master-host-0]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-2]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-1]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}

This seems similar to [2], and indeed I can't see the package
'glusterfs-server' installed locally on host-0. Any idea?

I think I understand:

It seems like the deployment of hc relied on the order of running the deploy
scripts as written in lagoinitfile. With the new deploy code, all of them run
in parallel. Does this make sense?

The scripts run in parallel as in "on all VMs at the same time", but
sequentially
as in "one script at a time on each VM" - this is the same behavior we
had with lago deployment.

Well, I do not think it works as intended, then. When running locally,
I logged into host-0, and after it failed, I had:

# dnf history
ID | Command line

| Date and time| Action(s)  | Altered
--
  4 | install -y --nogpgcheck ansible gluster-ansible-roles
ovirt-hosted-engine-setup ovirt-ansible-hosted-engine-setup
ovirt-ansible-reposit | 2021-06-17 11:54 | I, U   |8
  3 | -y --nogpgcheck install ovirt-host python3-coverage
vdsm-hook-vhostmd
  | 2021-06-08 02:15 | Install|  493 EE
  2 | install -y dnf-utils
https://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm
 | 2021-06-08 02:14 |
Install|1
  1 |

| 2021-06-08 02:06 | Install|  511 EE

Meaning, it already ran setup_first_host.sh (and failed there), but
didn't run hc_setup_host.sh, although it appears before it.

If you check [1], which is a build that failed due to this reason
(unlike the later ones), you see there:

-- Captured log setup --
2021-06-07 01:58:38+,594 INFO
[ost_utils.pytest.fixtures.deployment] Waiting for SSH on the VMs
(deployment:40)
2021-06-07 01:59:11+,947 INFO
[ost_utils.deployment_utils.package_mgmt] oVirt packages used on VMs:
(package_mgmt:133)
2021-06-07 01:59:11+,948 INFO
[ost_utils.deployment_utils.package_mgmt]
vdsm-4.40.70.2-1.git34cdc8884.el8.x86_64 (package_mgmt:135)
2021-06-07 01:59:11+,950 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
on lago-hc-basic-suite-master-host-1 (scripts:36)
2021-06-07 01:59:11+,950 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
on lago-hc-basic-suite-master-host-2 (scripts:36)
2021-06-07 01:59:11+,952 INFO
[ost_utils.deployment_utils.scripts] Running
/home/jenkins/workspace/ovirt-system-tests_hc-basic-suite-master/ovirt-system-tests/common/deploy-scripts/setup_host.sh
on lago-hc-basic-suite-master-host-0 (scripts:36)

[ovirt-devel] Re: hc-basic-suite-master fails due to missing glusterfs firewalld services

2021-06-21 Thread Marcin Sobczyk



On 6/17/21 1:44 PM, Yedidyah Bar David wrote:

On Wed, Jun 16, 2021 at 1:23 PM Yedidyah Bar David  wrote:

Hi,

I now tried running locally hc-basic-suite-master with a patched OST,
and it failed due to $subject. I checked and see that this also
happened on CI, e.g. [1], before it started failing due to an unrelated
reason later:

E   TASK [gluster.infra/roles/firewall_config : Add/Delete
services to firewalld rules] ***
E   failed: [lago-hc-basic-suite-master-host-0]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-2]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}
E   failed: [lago-hc-basic-suite-master-host-1]
(item=glusterfs) => {"ansible_loop_var": "item", "changed": false,
"item": "glusterfs", "msg": "ERROR: Exception caught:
org.fedoraproject.FirewallD1.Exception: INVALID_SERVICE: 'glusterfs'
not among existing services Permanent and Non-Permanent(immediate)
operation, Services are defined by port/tcp relationship and named as
they are in /etc/services (on most systems)"}

This seems similar to [2], and indeed I can't see the package
'glusterfs-server' installed locally on host-0. Any idea?

I think I understand:

It seems like the deployment of hc relied on the order of running the deploy
scripts as written in lagoinitfile. With the new deploy code, all of them run
in parallel. Does this make sense?
The scripts run in parallel as in "on all VMs at the same time", but 
sequentially
as in "one script at a time on each VM" - this is the same behavior we 
had with lago deployment.
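That scheduling - parallel across VMs, strictly sequential within each VM - can be sketched with a thread pool. This is a hypothetical illustration of the model, not OST's actual deployment code:

```python
from concurrent.futures import ThreadPoolExecutor

def deploy(scripts_per_vm, run):
    # One worker per VM runs that VM's scripts strictly in order,
    # while different VMs proceed concurrently.
    def run_vm(vm):
        for script in scripts_per_vm[vm]:  # sequential on this VM
            run(vm, script)

    with ThreadPoolExecutor(max_workers=len(scripts_per_vm)) as pool:
        futures = [pool.submit(run_vm, vm) for vm in scripts_per_vm]
        for f in futures:
            f.result()  # propagate any deploy-script failure
```

Note that this only guarantees ordering per VM; a script on host-0 that depends on a script having already finished on host-1 can still race, which matches the symptom discussed above.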


Regards, Marcin




Thanks and best regards,

[1] https://jenkins.ovirt.org/job/ovirt-system-tests_hc-basic-suite-master/2088/

[2] https://github.com/oVirt/ovirt-ansible/issues/124
--
Didi



--
Didi


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DVNFOM2NHSZO6G4CR2MA6YXKZ26Q6UJU/


[ovirt-devel] Re: AV 8.4 for CentOS Linux

2021-06-21 Thread Marcin Sobczyk



On 6/20/21 12:23 PM, Dana Elfassy wrote:

Hi,
I'm getting package conflicts when trying to upgrade my CentOS 8.4 and
CentOS Stream hosts.
(CentOS Stream was installed from the ISO, then I installed
ovirt-release-master.rpm and deployed the host)

The details below are the output for Centos-Stream
* The package conflicts also occur on OST -
https://rhv-devops-jenkins.rhev-ci-vms.eng.rdu2.redhat.com/job/ds-ost-baremetal_manual/7211/console



Do you know what could've caused this and how it can be fixed?
Yes, libvirt 7.4.0 + qemu-kvm 6.0.0 is currently broken and has bugs
filed on it.
We're trying to avoid these packages by excluding them at vdsm's spec
level [1] and downgrading to the older versions (7.0.0 and 5.2.0
respectively) that work in OST [2].

Unfortunately somewhere around late Friday a new version of qemu-kvm
was published, which makes the downgrade process go from 6.0.0-19 to 
6.0.0-18
and not the 5.2.0 that works. We don't have a reasonable resolution for 
OST yet.


If you manage your host manually, simply run 'dnf downgrade qemu-kvm'
until you get version 5.2.0, or download and install all the older RPMs
manually.

Regards, Marcin

[1] https://gerrit.ovirt.org/#/c/vdsm/+/115193/
[2] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/115194/
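The spec-level exclusion above boils down to a half-open version range check - accept libvirt-daemon-kvm >= 7.0.0-14 but < 7.4.0-1. A naive sketch follows; real RPM EVR comparison also handles epochs and alphanumeric segments, so treat this as a deliberately simplified assumption:

```python
def version_key(v):
    # "7.0.0-14" -> (7, 0, 0, 14); purely numeric comparison only,
    # unlike full RPM EVR comparison.
    return tuple(int(part) for part in v.replace("-", ".").split("."))

def in_allowed_range(version, minimum, upper_bound):
    # Half-open range: minimum <= version < upper_bound, mirroring the
    # spec's ">= 7.0.0-14 and < 7.4.0-1" requirement.
    return version_key(minimum) <= version_key(version) < version_key(upper_bound)
```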


Thanks,
Dana

[root@localhost ~]# rpm -q vdsm
vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64

[root@localhost ~]# dnf module list virt
Last metadata expiration check: 1:09:54 ago on Sun 20 Jun 2021 
05:09:50 AM EDT.

CentOS Stream 8 - AppStream
Name                              Stream               Profiles  Summary
virt                              rhel [d][e]                common 
[d]  Virtualization module


The error:
[root@localhost ~]# dnf update
Last metadata expiration check: 1:08:13 ago on Sun 20 Jun 2021 
05:09:50 AM EDT.

Error:
 Problem 1: package vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64 requires 
(libvirt-daemon-kvm >= 7.0.0-14 and libvirt-daemon-kvm < 7.4.0-1), but 
none of the providers can be installed
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-7.0.0-14.1.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.0.0-35.module_el8.5.0+746+bbd5d70c.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.0.0-36.module_el8.5.0+821+97472045.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-5.6.0-10.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.0.0-17.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.0.0-25.2.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.6.0-13.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.6.0-7.1.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-6.6.0-7.3.el8.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-7.0.0-13.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.4.0-1.el8s.x86_64 and 
libvirt-daemon-kvm-7.0.0-14.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.0.0-13.el8s.x86_64 and 
libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.0.0-14.el8s.x86_64 and 
libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
  - cannot install both libvirt-daemon-kvm-7.0.0-9.el8s.x86_64 and 
libvirt-daemon-kvm-7.4.0-1.el8s.x86_64
  - cannot install the best update candidate for package 
vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64
  - cannot install the best update candidate for package 
libvirt-daemon-kvm-7.0.0-14.1.el8.x86_64
 Problem 2: problem with installed package 
vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64
  - package vdsm-4.40.70.4-5.git73fbe23cd.el8.x86_64 requires 
(qemu-kvm >= 15:5.2.0 and qemu-kvm < 15:6.0.0), but none of the 
providers can be installed
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:5.2.0-16.el8s.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-48.module_el8.5.0+746+bbd5d70c.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-51.module_el8.5.0+821+97472045.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.1.0-23.el8.1.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-19.el8.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-29.el8.3.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:4.2.0-29.el8.6.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:5.1.0-14.el8.1.x86_64
  - cannot install both qemu-kvm-15:6.0.0-19.el8s.x86_64 and 
qemu-kvm-15:5.1.0-20.el8.x86_64
  

[ovirt-devel] Re: Moving #vdsm to #ovirt?

2021-06-21 Thread Marcin Sobczyk



On 6/21/21 2:36 PM, Nir Soffer wrote:

We had a mostly dead #vdsm channel on freenode [1].

Recently there was a hostile takeover of freenode, and the old freenode
folks created the libera [2] network. Most (all?) projects have moved to
this network.

We can move #vdsm to libera, but I think we have a better option: using
the #ovirt channel on oftc [3], which is pretty lively.

Having vdsm developers in the #ovirt channel is good for the project and
will make it easier to reach them.

Moving to libera requires registration work. Moving to #ovirt requires no
change. In both cases we need to update the vdsm README and ovirt.org.

What do you think?

+1



[1] https://freenode.net/
[2] https://libera.chat/
[3] https://www.oftc.net/

Nir


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/53EQIGW4EZ7OAVHIFBEW4F5BDTPLXOA7/


[ovirt-devel] Re: AV 8.4 for CentOS Linux

2021-06-09 Thread Marcin Sobczyk



On 6/9/21 11:28 AM, Marcin Sobczyk wrote:


On 6/9/21 9:25 AM, Marcin Sobczyk wrote:

Hi,

On 6/9/21 9:08 AM, Sandro Bonazzola wrote:

Il giorno mer 9 giu 2021 alle ore 09:05 Martin Perina
mailto:mper...@redhat.com>> ha scritto:

  Hi,

  wouldn't it be worth enabling CentOS 8.4 OST again? Or at least
  as a single run to verify those packages?


it would be great

I'll try to handle it today and will report back the results.

OST images have been published to the u/s repo [1].
I've checked that they contain appropriate package versions.
I've pushed [2] and am waiting for CI results - hopefully the agents will
pick them up.
I'm also installing them on my server to verify manually.
The results are in [3]. The basic suite failed on UI tests for an
unrelated reason - we have a problem with podman in our infra which I'm
already discussing with Ehud.
I've run the UI tests locally and they were fine.
I guess we're good then :)
I guess we're good then :)

Regards, Marcin

[3] 
https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/17203/pipeline/154




Regards,

[1] https://templates.ovirt.org/yum/
[2] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/115164/

Regards, Marcin


  Martin


  On Wed, Jun 9, 2021 at 8:56 AM Sandro Bonazzola
  mailto:sbona...@redhat.com>> wrote:

  Hi all,

  New versions of AV packages in CentOS Virt SIG for CentOS
  Linux 8.4 are available on
  
https://buildlogs.centos.org/centos/8/virt/x86_64/advanced-virtualization/
  
<https://buildlogs.centos.org/centos/8/virt/x86_64/advanced-virtualization/>
  . Please help with testing so we can release it.

  hivex-1.3.18-21.el8
  libguestfs-1.44.0-2.el8
  libguestfs-winsupport-8.2-1.el8
  libiscsi-1.18.0-8.el8
  libnbd-1.6.0-3.el8
  libosinfo-1.8.0-1.el8
  libtpms-0.7.4-4.20201106git2452a24dab.el8
  libvirt-dbus-1.3.0-2.el8
  libvirt-7.0.0-14.1.el8
  libvirt-python-7.0.0-1.el8
  nbdkit-1.24.0-1.el8
  netcf-0.2.8-12.el8
  perl-Sys-Virt-7.0.0-1.el8
  python-pyvmomi-6.7.1-7.el8
  qemu-kvm-5.2.0-16.el8
  seabios-1.14.0-1.el8
  sgabios-0.20170427git-3.el8
  SLOF-20200717-1.gite18ddad8.el8
  supermin-5.2.1-1.el8
  swtpm-0.4.2-1.20201201git2df14e3.el8
  virglrenderer-0.8.2-1.el8
  virt-v2v-1.42.0-10.el8

  If no negative feedback is reported, the plan is to tag
  it for release tomorrow.


  --

  Sandro Bonazzola

  MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

  Red Hat EMEA <https://www.redhat.com/>

  sbona...@redhat.com <mailto:sbona...@redhat.com>

  <https://www.redhat.com/>   

  *Red Hat respects your work life balance. Therefore there is
  no need to answer this email out of your office hours.*
  ___
  Devel mailing list -- devel@ovirt.org <mailto:devel@ovirt.org>
  To unsubscribe send an email to devel-le...@ovirt.org
  <mailto:devel-le...@ovirt.org>
  Privacy Statement: https://www.ovirt.org/privacy-policy.html
  <https://www.ovirt.org/privacy-policy.html>
  oVirt Code of Conduct:
  https://www.ovirt.org/community/about/community-guidelines/
  <https://www.ovirt.org/community/about/community-guidelines/>
  List Archives:
  
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5TXMYDYQANWG44YQJFTKRQSTSHVXU4MQ/
  
<https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5TXMYDYQANWG44YQJFTKRQSTSHVXU4MQ/>



  --
  Martin Perina
  Manager, Software Engineering
  Red Hat Czech s.r.o.



--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com <mailto:sbona...@redhat.com>

<https://www.redhat.com/> 

*Red Hat respects your work life balance. Therefore there is no need
to answer this email out of your office hours.
<https://mojo.redhat.com/docs/DOC-1199578>*

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QLLBS233MBGNTBJNIDIX3DIDUM7D2E46/


[ovirt-devel] Re: AV 8.4 for CentOS Linux

2021-06-09 Thread Marcin Sobczyk



On 6/9/21 9:25 AM, Marcin Sobczyk wrote:

Hi,

On 6/9/21 9:08 AM, Sandro Bonazzola wrote:


On Wed, Jun 9, 2021 at 09:05 Martin Perina <mper...@redhat.com> wrote:

 Hi,

 wouldn't it be worth enabling CentOS 8.4 OST again? Or at least
 as a single run to verify those packages?


it would be great

I'll try to handle it today and will report back the results.

OST images have been published to the u/s repo [1].
I've checked that they contain appropriate package versions.
I've pushed [2] and am waiting for CI results - hopefully the agents will
pick them up.

I'm also installing them on my server to verify manually.

Regards,

[1] https://templates.ovirt.org/yum/
[2] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/115164/


Regards, Marcin


 Martin


 On Wed, Jun 9, 2021 at 8:56 AM Sandro Bonazzola
 mailto:sbona...@redhat.com>> wrote:

 Hi all,

 New versions of AV packages in CentOS Virt SIG for CentOS
 Linux 8.4 are available on
 
 https://buildlogs.centos.org/centos/8/virt/x86_64/advanced-virtualization/
 Please help with testing so we can release it.

 hivex-1.3.18-21.el8
 libguestfs-1.44.0-2.el8
 libguestfs-winsupport-8.2-1.el8
 libiscsi-1.18.0-8.el8
 libnbd-1.6.0-3.el8
 libosinfo-1.8.0-1.el8
 libtpms-0.7.4-4.20201106git2452a24dab.el8
 libvirt-dbus-1.3.0-2.el8
 libvirt-7.0.0-14.1.el8
 libvirt-python-7.0.0-1.el8
 nbdkit-1.24.0-1.el8
 netcf-0.2.8-12.el8
 perl-Sys-Virt-7.0.0-1.el8
 python-pyvmomi-6.7.1-7.el8
 qemu-kvm-5.2.0-16.el8
 seabios-1.14.0-1.el8
 sgabios-0.20170427git-3.el8
 SLOF-20200717-1.gite18ddad8.el8
 supermin-5.2.1-1.el8
 swtpm-0.4.2-1.20201201git2df14e3.el8
 virglrenderer-0.8.2-1.el8
 virt-v2v-1.42.0-10.el8

 If no negative feedback is reported, the plan is to tag
 it for release tomorrow.


 --

 Sandro Bonazzola

 MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

 Red Hat EMEA <https://www.redhat.com/>

 sbona...@redhat.com <mailto:sbona...@redhat.com>

 <https://www.redhat.com/>

 *Red Hat respects your work life balance. Therefore there is
 no need to answer this email out of your office hours.*
 ___
 Devel mailing list -- devel@ovirt.org <mailto:devel@ovirt.org>
 To unsubscribe send an email to devel-le...@ovirt.org
 <mailto:devel-le...@ovirt.org>
 Privacy Statement: https://www.ovirt.org/privacy-policy.html
 <https://www.ovirt.org/privacy-policy.html>
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 <https://www.ovirt.org/community/about/community-guidelines/>
 List Archives:
 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5TXMYDYQANWG44YQJFTKRQSTSHVXU4MQ/
 
<https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5TXMYDYQANWG44YQJFTKRQSTSHVXU4MQ/>



 --
 Martin Perina
 Manager, Software Engineering
 Red Hat Czech s.r.o.



--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com <mailto:sbona...@redhat.com>

<https://www.redhat.com/> 

*Red Hat respects your work life balance. Therefore there is no need
to answer this email out of your office hours.
<https://mojo.redhat.com/docs/DOC-1199578>*

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XO6JE5VWMVG4L7TB4YXOYKQJI7JONY2Z/


[ovirt-devel] Re: AV 8.4 for CentOS Linux

2021-06-09 Thread Marcin Sobczyk

Hi,

On 6/9/21 9:08 AM, Sandro Bonazzola wrote:



On Wed, Jun 9, 2021 at 09:05 Martin Perina <mper...@redhat.com> wrote:


Hi,

wouldn't it be worth enabling CentOS 8.4 OST again? Or at least
as a single run to verify those packages?


it would be great

I'll try to handle it today and will report back the results.

Regards, Marcin



Martin


On Wed, Jun 9, 2021 at 8:56 AM Sandro Bonazzola
mailto:sbona...@redhat.com>> wrote:

Hi all,

New versions of AV packages in CentOS Virt SIG for CentOS
Linux 8.4 are available on

https://buildlogs.centos.org/centos/8/virt/x86_64/advanced-virtualization/
Please help with testing so we can release it.

hivex-1.3.18-21.el8
libguestfs-1.44.0-2.el8
libguestfs-winsupport-8.2-1.el8
libiscsi-1.18.0-8.el8
libnbd-1.6.0-3.el8
libosinfo-1.8.0-1.el8
libtpms-0.7.4-4.20201106git2452a24dab.el8
libvirt-dbus-1.3.0-2.el8
libvirt-7.0.0-14.1.el8
libvirt-python-7.0.0-1.el8
nbdkit-1.24.0-1.el8
netcf-0.2.8-12.el8
perl-Sys-Virt-7.0.0-1.el8
python-pyvmomi-6.7.1-7.el8
qemu-kvm-5.2.0-16.el8
seabios-1.14.0-1.el8
sgabios-0.20170427git-3.el8
SLOF-20200717-1.gite18ddad8.el8
supermin-5.2.1-1.el8
swtpm-0.4.2-1.20201201git2df14e3.el8
virglrenderer-0.8.2-1.el8
virt-v2v-1.42.0-10.el8

If no negative feedback is reported, the plan is to tag
it for release tomorrow.


-- 


Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com 

 

*Red Hat respects your work life balance. Therefore there is
no need to answer this email out of your office hours.*
___
Devel mailing list -- devel@ovirt.org 
To unsubscribe send an email to devel-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5TXMYDYQANWG44YQJFTKRQSTSHVXU4MQ/





-- 
Martin Perina

Manager, Software Engineering
Red Hat Czech s.r.o.



--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA 

sbona...@redhat.com 

 

*Red Hat respects your work life balance. Therefore there is no need
to answer this email out of your office hours.*

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YIQC5T64OVUDYHTPVTB7D6VQ537XRT2W/


[ovirt-devel] Re: OST HE fails due to empty CPU type (was: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 2038 - Still Failing!)

2021-06-01 Thread Marcin Sobczyk

Hi,


On 6/1/21 8:25 AM, Yedidyah Bar David wrote:

  Hi all,

On Tue, Jun 1, 2021 at 5:23 AM  wrote:

Project: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
Build: 
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/2038/

This has been failing for a week now. Not sure about the root cause.

There's a bug for this [1].
Yesterday I pushed a workaround to ost-images for this problem [2], so
if you update the images you should be good.


Regards, Marcin

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1961558
[2] https://gerrit.ovirt.org/#/c/ost-images/+/115002/


 From HE deploy code POV:

https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/2038/artifact/exported-artifacts/test_logs/he-basic-suite-master/lago-he-basic-suite-master-host-0/_var_log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-ansible-create_target_vm-20210601042210-i8r7ks.log
:

2021-06-01 04:22:22,497+0200 DEBUG var changed: host "localhost" var
"cluster_facts" type "" value: "{
 "changed": false,
 "failed": false,
 "ovirt_clusters": [
 {
 "affinity_groups": [],
 "ballooning_enabled": true,
 "comment": "",
 "cpu": {
 "architecture": "undefined",
 "type": ""
 },

Meaning, the engine says that cluster Default's cpu type is "". The
code uses this value as-is, and a few tasks later fails in:

2021-06-01 04:22:26,815+0200 DEBUG ansible on_any args TASK:
ovirt.ovirt.hosted_engine_setup : Convert CPU model name  kwargs
is_conditional:False
2021-06-01 04:22:26,816+0200 DEBUG ansible on_any args localhost TASK:
ovirt.ovirt.hosted_engine_setup : Convert CPU model name  kwargs
2021-06-01 04:22:26,974+0200 DEBUG var changed: host "localhost" var
"ansible_play_hosts" type "" value: "[]"
2021-06-01 04:22:26,974+0200 DEBUG var changed: host "localhost" var
"ansible_play_batch" type "" value: "[]"
2021-06-01 04:22:26,974+0200 DEBUG var changed: host "localhost" var
"play_hosts" type "" value: "[]"
2021-06-01 04:22:26,975+0200 ERROR ansible failed {
 "ansible_host": "localhost",
 "ansible_playbook":
"/usr/share/ovirt-hosted-engine-setup/ansible/trigger_role.yml",
 "ansible_result": {
 "_ansible_no_log": false,
 "msg": "The task includes an option with an undefined
variable. The error was: 'dict object' has no attribute ''\n\nThe
error appears to be in
'/usr/share/ansible/collections/ansible_collections/ovirt/ovirt/roles/hosted_engine_setup/tasks/create_target_vm/01_create_target_hosted_engine_vm.yml':
line 64, column 5, but may\nbe elsewhere in the file depending on the
exact syntax problem.\n\nThe offending line appears to be:\n\n  {{
server_cpu_list['ovirt_system_option']['values'][0]['value'].split(';
')|list|difference(['']) }}\n  - name: Convert CPU model name\n^
here\n"
 },
 "ansible_task": "Convert CPU model name",
 "ansible_type": "task",
 "status": "FAILED",
 "task_duration": 0
}
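The failure above can be reproduced outside ansible with a plain dict lookup. This is a minimal, hedged sketch: the `server_cpu_map` contents below are invented for illustration (they are not the engine's actual CPU list), but the failure mode - an empty cluster CPU type used as a lookup key - is the one quoted in the log.

```python
# Minimal reproduction of the failure mode above, outside ansible:
# the cluster reports cpu type "" and that empty string is then used
# as a key into a server CPU map, which has no such entry. The map
# contents below are invented for illustration.
cluster_cpu = {"architecture": "undefined", "type": ""}  # engine's answer
server_cpu_map = {
    "Intel Cascadelake Server Family": "Cascadelake-Server-noTSX",
    "Secure Intel Cascadelake Server Family": "Cascadelake-Server",
}

# What the playbook effectively does -- a plain lookup, which fails
# ("'dict object' has no attribute ''" in Jinja terms):
try:
    model = server_cpu_map[cluster_cpu["type"]]
except KeyError:
    model = None

# A defensive variant: treat an empty/unset type as "not yet known"
# instead of blowing up mid-deployment.
model = server_cpu_map.get(cluster_cpu["type"]) or None
print(model)  # -> None until the engine reports a real CPU type
```

The real fix, of course, is for the engine to report a non-empty cluster CPU type, as tracked in the bug referenced earlier in the thread.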

Any ideas?

Thanks and best regards,

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7NB6SBX6ZTTIN65RFQBASWUEGVY2DFOW/


[ovirt-devel] Re: OST ansible inventory rework

2021-05-26 Thread Marcin Sobczyk

Hi,

On 5/25/21 4:07 PM, Yedidyah Bar David wrote:

Hi all,

As part of the discussion around [1], we talked about changing the way
we create and use the ansible inventory.

I am now in the middle of doing something about this. It turned out
far more complex and possibly less elegant than
expected/hoped/intended.

I'd like to get a high-level review for my WIP [2]. It's not tested
and not ready for review. But if people think this is going in the
correct direction, I'll continue. Otherwise, I'll give up - and then
we have to decide how to continue otherwise. Parts of this can most
likely be done more nicely, other parts should probably be completely
replaced/removed/redone.

While the 'Inventory' class is very nice, propagating it into other parts
of the codebase indeed turned out to be quite a pickle... I was thinking
about it and tried a different approach - simply converting the inventory
provided by the backend into a directory [3]. It also meets the design
goals - the backend is unaware of the HE VM and we can extend the
inventory dynamically by simply dropping an additional 'he.yml' file
there when we're ready. Preliminary tests have shown that there's no
breakage, so we should be able to avoid the "global inventory fix" and
stay with what we have right now.
WDYT?
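The "inventory as a directory" idea can be sketched in a few lines of Python. The file names, group names, and the INI-style format below are made up for illustration (OST's real inventory files may differ); the merge function only mimics how ansible combines multiple inventory sources found in one directory.

```python
# Sketch of the "inventory as a directory" idea from the thread: the
# backend writes its own inventory file up front, and a separate
# 'he.yml'-style file is dropped in later, without the backend ever
# knowing about the HE VM. Names here are illustrative only.
import os
import tempfile

def load_inventory_dir(path):
    """Merge all inventory files in a directory, collecting hosts per
    [group] section, roughly the way ansible merges an inventory dir."""
    groups = {}
    current = None
    for fname in sorted(os.listdir(path)):
        with open(os.path.join(path, fname)) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                if line.startswith("[") and line.endswith("]"):
                    current = line[1:-1]
                    groups.setdefault(current, [])
                elif current is not None:
                    groups[current].append(line)
    return groups

inv = tempfile.mkdtemp()
# The backend drops its static part of the inventory...
with open(os.path.join(inv, "00-backend.ini"), "w") as f:
    f.write("[hosts]\nhost-0\nhost-1\n")
print(load_inventory_dir(inv))  # backend hosts only

# ...and once the HE VM is up, another file extends the inventory
# dynamically, without touching the backend's file.
with open(os.path.join(inv, "10-he.ini"), "w") as f:
    f.write("[engine]\nhosted-engine-vm\n")
print(load_inventory_dir(inv))  # now also contains the engine group
```

The design point is the second write: extending the inventory is just adding a file, so no code path in the backend has to learn about the HE VM.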



We should probably also make some hard
decisions, which I am frankly not sure we can make without having more
concrete ideas about what other backends we want and how they would
look like. So for now I ignored all this and simply did a POC.
I definitely want to keep OST code backend- (or in fact lago-) independent
because lago is not maintained and its raison d'être no longer exists...
However, I doubt we'll find the time to actually move to/implement a
different backend to run OST.
I'd definitely like to see oVirt tested by oVirt some day (as that
should be doable), but I guess that's up to the community.

Regards, Marcin



So, WDYT?

Thanks and best regards,

[1] https://gerrit.ovirt.org/c/ovirt-system-tests/+/114653

[2] https://github.com/didib/ovirt-system-tests/commits/add-inventory


[3] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/114948/
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HQRVGBFVQSIDMVR4RTUCGEBWMHBRH35J/


[ovirt-devel] Re: el8-stream is available in CI

2021-05-07 Thread Marcin Sobczyk



On 5/7/21 3:43 PM, Ehud Yonasi wrote:

Hi,

I’ve added ppc64 support in [1].
There was some (probably random) crash of the x86_64 build, but ppc 
seems to work fine indeed, thanks!


https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/27636/pipeline/151



Thanks.

[1] : https://gerrit.ovirt.org/c/jenkins/+/114701 



On 7 May 2021, at 10:45, Ales Musil wrote:




On Thu, Apr 8, 2021 at 1:27 PM Ehud Yonasi wrote:


Hey everyone,

I wanted to let you know that you can run your patches now on
el8-stream.

In order to do that simply add to the stdci yaml file the
following section:

distro: el8stream
runtime-requirements:
  host-distro: newer

You can also see the example on the patch [1].

The runtime requirements part is due to lack compatibility issues
with el7 hosts.

If you see any problems, or have any questions please let me know.

Thanks,
Ehud.

[1]: https://gerrit.ovirt.org/#/c/jenkins/+/114174/6/stdci.yaml

___
Devel mailing list -- devel@ovirt.org 
To unsubscribe send an email to devel-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5XK6Y4BHIYWFRLTNTWYJHTSIQELOSU4W/




Hi,

I have created ticket [0] also for ppc64le architecture.
It is needed to unblock work on vdsm for 4.4.7 [1].

Thanks,
Ales

[0] https://ovirt-jira.atlassian.net/browse/OVIRT-3093 

[1] https://gerrit.ovirt.org/c/vdsm/+/114660 




--
Ales Musil
Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com  IM: amusil





___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JX7EYNSPGTOXGVCDLGNU5K6ZALFREC3V/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JKDM5QKTYOUEJY4JAV43SGF4645DUTDJ/


[ovirt-devel] Re: OST FAILURES

2021-04-22 Thread Marcin Sobczyk

Hi,

On 4/22/21 4:19 PM, Ahmad Khiet wrote:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7970/ 



You passed:

https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/11893/artifact/build-artifacts.el8.x86_64/ovirt-engine-4.4.6.5-0.0.master.20210421113712.git92b872c276b.el8.noarch.rpm

as CUSTOM_REPOS - this won't work.
Please take a look at the comments around the CUSTOM_REPOS parameter.
If you wanted to test that engine build, you should've used instead:

https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/11893/artifact/build-artifacts.el8.x86_64/

Please remember that the patch for which you ran 'build artifacts'
needs to be rebased on top of master. Otherwise your build will be older
than the one here [1], which would cause your RPM not to be used in the
run, and OST would fail with a message like:

"None of user custom repos has been used".

Regards, Marcin

[1] https://resources.ovirt.org/repos/ovirt/tested/master/rpm/el8/
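The rule explained above - CUSTOM_REPOS must point at a build-artifacts directory, not at an individual RPM - can be captured in a small check. This is an illustrative sketch, not OST's actual validation code, and the shortened RPM file name below is hypothetical.

```python
# Sketch of the CUSTOM_REPOS rule from the reply above: the parameter
# must point at a build-artifacts *directory*, not at a single .rpm
# file. Illustrative only -- not OST's actual validation code.

def looks_like_custom_repo(url: str) -> bool:
    """Accept artifact-directory URLs, reject direct RPM links."""
    if url.endswith(".rpm"):
        return False  # a single package is not a repo
    return url.endswith("/") or url.rstrip("/").endswith(("x86_64", "noarch"))

# Hypothetical shortened examples based on the URLs quoted above:
bad = ("https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/"
       "11893/artifact/build-artifacts.el8.x86_64/ovirt-engine.noarch.rpm")
good = ("https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/"
        "11893/artifact/build-artifacts.el8.x86_64/")

print(looks_like_custom_repo(bad))   # -> False
print(looks_like_custom_repo(good))  # -> True
```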



On Thu, 22 Apr 2021 at 3:18 PM Eyal Shenitzky wrote:


Ahmad, please attach the link for the failed run and the relevant
error that you see.

On Thu, 22 Apr 2021 at 14:08, Ahmad Khiet <akh...@redhat.com> wrote:

Hi,

I'm trying to run OST for my patch and, as far as I can see, all patches
are failing because of some errors before and after the test.


Have a nice day

-- 


Ahmad Khiet

Red Hat

akh...@redhat.com 
M: +972-54-6225629 



___
Devel mailing list -- devel@ovirt.org 
To unsubscribe send an email to devel-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5XNTPZ7I7NVUCP22VLEFAPCU6RVSJL5L/





-- 
Regards,

Eyal Shenitzky



--

Ahmad Khiet

Red Hat

akh...@redhat.com 
M: +972-54-6225629 




___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/GDP4GXWRFIHLMAA2ZW6UVFTUXLCX2A5J/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VWMBTHCJ6F2M7M4TKYMK4KJBZVH6YIOY/


[ovirt-devel] Re: Thank you from an oVirt user

2021-04-08 Thread Marcin Sobczyk

Hi,

On 4/7/21 9:29 PM, Scott Sobotka wrote:

Dear oVirt developers,

I just wanted to drop you all a note to thank you for the great work 
you've done with oVirt. I've been using it for several years and am 
vastly impressed with your robust and stable solution to what is, 
frankly, a very complex set of problems. Thank you so much. Your 
efforts are greatly appreciated.

Thanks for these kind words :)
May oVirt serve you well!

Regards, Marcin



Thanks and have a great day,
--Scott Sobotka



___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KYUWNBUMNHUYDLWAXRCS6NWBUKG4UYDO/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZXEZ4MJGEFRH2RDV3L3VESJ6KRE5QARG/


[ovirt-devel] Re: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 1974 - Still Failing!

2021-04-06 Thread Marcin Sobczyk



On 4/6/21 9:55 AM, Yedidyah Bar David wrote:

On Tue, Apr 6, 2021 at 9:24 AM Marcin Sobczyk  wrote:

Hi,

On 4/6/21 7:23 AM, Yedidyah Bar David wrote:

On Mon, Apr 5, 2021 at 5:53 AM  wrote:

Project: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
Build: 
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1974/

FYI: This failed twice in a row (1973 and 1974), for the same reason.
I reproduced locally, looked a bit, failed to find the root cause.
When I connected
to host-1's console, it was stuck in emergency after reboot. I checked
a bit, there
was some error about kdump failing to read the kernel image
( /boot/vmlinuz-4.18.0-240.15.1.el8_3.x86_64 ), when I tried manually
as root I did
manage to read it. I rebooted, and the VM came up fine. I decided to
try OST again,
cleaned up and ran it, and opened a 'lago console' on the vm after it
was up, but
OST passed. Tried again, passed again. Then I manually ran in CI 1975
and it passed,
and also the nightly 1976 passed. So I am going to ignore for now.

I think we need a patch to make lago/OST log consoles of all the VMs.
I might try
to work on this.

Also stumbled upon this. Please take a look at
https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/114050/

Yes, I did notice this change and wondered if it's related...

But it's not merged yet, and still HE passed at least 4 times (two locally,
two on CI). Obviously this does not prove that the issue is fixed.

Anyway, in addition to merely fixing it (which perhaps your patch does),
I also wanted to emphasize the importance of making it easier to fix
future such cases. How did you manage to find the root cause?

My case was similar - HE suite was failing for me constantly. I noticed
host-1 drops to emergency shell, so I just 'virsh console'd inside
and went through the logs. That's when I spotted the problem with
the additional '/var/tmp' disk. I tried the fix on my machine and HE
suite started working again. Moments later I tried running HE suite
without the patch and it was successful again.

I couldn't figure out the real cause behind these problems,
but removing the unnecessary additional disk from host-1 seemed
to do the trick.

+1 for logging consoles of the VMs - that should help with this kind
of problem in the future.

Regards, Marcin



Best regards,

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3H2HXEGUTWYV23EL7QT6NJETCLHN6MWG/


[ovirt-devel] Re: [oVirt Jenkins] ovirt-system-tests_he-basic-suite-master - Build # 1974 - Still Failing!

2021-04-06 Thread Marcin Sobczyk

Hi,

On 4/6/21 7:23 AM, Yedidyah Bar David wrote:

On Mon, Apr 5, 2021 at 5:53 AM  wrote:

Project: https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/
Build: 
https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1974/

FYI: This failed twice in a row (1973 and 1974), for the same reason.
I reproduced locally, looked a bit, failed to find the root cause.
When I connected
to host-1's console, it was stuck in emergency after reboot. I checked
a bit, there
was some error about kdump failing to read the kernel image
( /boot/vmlinuz-4.18.0-240.15.1.el8_3.x86_64 ), when I tried manually
as root I did
manage to read it. I rebooted, and the VM came up fine. I decided to
try OST again,
cleaned up and ran it, and opened a 'lago console' on the vm after it
was up, but
OST passed. Tried again, passed again. Then I manually ran in CI 1975
and it passed,
and also the nightly 1976 passed. So I am going to ignore for now.

I think we need a patch to make lago/OST log consoles of all the VMs.
I might try
to work on this.
Also stumbled upon this. Please take a look at 
https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/114050/


Regards, Marcin



Best regards,

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/T6STZLXWV3QG2IN2QZ43XZ5QAKXSRW4L/


[ovirt-devel] Re: basic suite failing on test_004_basic_sanity.test_verify_template_disk_copied_and_removed

2021-03-03 Thread Marcin Sobczyk

Hi,

On 3/3/21 7:54 AM, Yedidyah Bar David wrote:

On Wed, Mar 3, 2021 at 12:16 AM Vojtech Juranek  wrote:

It looks like OST in the meantime got broken even more
and now fails before running tests with:

 21:39:51 ../basic-suite-master/test-scenarios/test_098_ovirt_provider_ovn.py:26: 
in 
 21:39:51 import requests
 21:39:51 E   ModuleNotFoundError: No module named 'requests'

I don't see any recent commit which can cause this. So maybe
some issue with OST images?

I don't think that's related. I can only guess that it's due to [1]:

"pylint installed: WARNING: The directory
'/home/jenkins/workspace/ovirt-system-tests_basic-suite-master_nightly/ovirt-system-tests/.cache/pip'
or its parent directory is not owned or is not writable by the current
user." ... "requests==2.25.1"

Whereas last successful run [2], as well as some later failed ones,
have the same warning, but use "2.20.0" (from the OS, it seems) and
also have:

/usr/lib/python3.6/site-packages/requests/__init__.py:91
   /usr/lib/python3.6/site-packages/requests/__init__.py:91:
RequestsDependencyWarning: urllib3 (1.26.3) or chardet (3.0.4) doesn't
match a supported version!
 RequestsDependencyWarning)

So perhaps the failure is related to having a newer package on pypi
and not enough permissions for installing it. Just a guess.
Posted [3] to handle this. Please ask someone with +2 on OST 
(Michal/Martin/Eitan) to review and merge - I'll be on PTO for the next 
~2.5 weeks.


Regards, Marcin

[3] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/113743/



[1] 
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/923/artifact/exported-artifacts/mock_logs/script/stdout_stderr.log

[2] 
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/916/artifact/exported-artifacts/mock_logs/script/stdout_stderr.log



On Tuesday, 2 March 2021 13:40:07 CET Benny Zlotnik wrote:

I started the OST job two hours ago[1], but it's still pending and I
see another job stuck in pending, not sure if there's an issue with
the workers


[1]
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7828
On Tue, Mar 2, 2021 at 12:52 PM Benny Zlotnik  wrote:



I posted a patch https://gerrit.ovirt.org/c/ovirt-engine/+/113740



On Tue, Mar 2, 2021 at 12:44 PM Eyal Shenitzky 
wrote:


Hi Marcin,



There were no changes in the test or with the verification recently.
Did something change with cirros_image_glance_disk_name?



On Tue, 2 Mar 2021 at 11:12, Marcin Sobczyk 
wrote:



Hi All,



basic suite started failing on
'test_verify_template_disk_copied_and_removed'.
Looking at the error reported (i.e. [1][2]) it seems that we can't get
a valid handle to the glance disk.



Can someone take a look at this?



Regards, Marcin



[1]
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7826/consoleFull#L2,310

[2]
https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/15774/pipeline#step-220-log-1064
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/J5O6DPZQPSF7UT3L2NUG5FNLCS3FVHI4/




--
Regards,
Eyal Shenitzky
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TA466HK3Q4FUWUUUZN7NHFPTEQTDA3YH/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
List Archives:
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AB6GI5KHJDSQ3ABYIHKMAICUMLNW2OE2/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/I66XVC6N6S6EYPVJK5GMRVKKXZTPVZU7/




___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www

[ovirt-devel] basic suite failing on test_004_basic_sanity.test_verify_template_disk_copied_and_removed

2021-03-02 Thread Marcin Sobczyk

Hi All,

basic suite started failing on 
'test_verify_template_disk_copied_and_removed'.
Looking at the error reported (i.e. [1][2]) it seems that we can't get a 
valid handle to the glance disk.


Can someone take a look at this?

Regards, Marcin

[1] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7826/consoleFull#L2,310
[2] 
https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/15774/pipeline#step-220-log-1064

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/J5O6DPZQPSF7UT3L2NUG5FNLCS3FVHI4/


[ovirt-devel] podman-remote finally works in CI for UI tests

2021-02-25 Thread Marcin Sobczyk

Hi all,

with the latest update of podman we should finally be able to run UI
tests in CI again.

I posted a WIP patch for OST that should make this happen, please see
the commit message for details:

https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/113715/

Ehud, Galit, could you please try to adapt the global setup process so
that it runs the podman system service?

Regards, Marcin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7HQ6SWMPXURVTKAHJDVAGU2T72252REY/


[ovirt-devel] Re: Publish master more often

2021-02-24 Thread Marcin Sobczyk



On 2/24/21 11:05 AM, Yedidyah Bar David wrote:

On Wed, Feb 24, 2021 at 11:43 AM Milan Zamazal  wrote:

Yedidyah Bar David  writes:


Hi all,

Right now, when we merge a patch e.g. to the engine (and many other
projects), it can take up to several days until it is used by the
hosted-engine ovirt-system-tests suite. Something similar will happen
soon if/when we introduce suites that use ovirt-node.

If I got it right:
- Merge causes CI to build the engine - immediately, takes ~ 1 hour (say)
- A publisher job [1] publishes it to resources.ovirt.org (daily,
midnight (UTC))
- The next run of an appliance build [2] includes it (daily, afternoon)
- The next run of the publisher [1] publishes the appliance (daily, midnight)
- The next run of ost-images [3] includes the appliance (daily,
midnight, 2 hours after the publisher) (and publishes it immediately)
- The next run of ost (e.g. [4]) will use it (daily, slightly *before*
ost-images, but I guess we can change that. And this does not affect
manual runs of OST, so can probably be ignored in the calculation, at
least to some extent).

So if I got it right, a patch merged to the engine in some morning,
will be used by the nightly run of OST HE only after almost 3 days,
and available for manual runs after 2 days. IMO that's too much time.
I might be somewhat wrong, but not very, I think.

One partial solution is to add automation .repos lines to relevant
projects that will link at lastSuccessfulBuild (let's call it lastSB)
of the more important projects they consume - e.g. appliance to use
lastSB of engine+dwh+a few others, node to use lastSB of vdsm, etc.
This will require more maintenance (adding/removing/fixing projects as
needed) and cause some more load on CI (as now packages will be
downloaded from it instead of from resources.ovirt.org).

Another solution is to run relevant jobs (publisher/appliance/node)
far more often - say, once every two hours.

One important thing to consider is an ability to run OST on our patches
at all.  If there is (almost) always a newer build available then custom
repos added to OST runs, whether on Jenkins or locally, will be ignored
and we'll be unable to test our patches before they are merged.

Indeed. That's an important point. IIRC OST has a ticket specifically
addressing this issue.

Yes, we have:

https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/113223/

and:

https://issues.redhat.com/browse/RHV-41025

which is not implemented yet.

The downside of upgrading to the latest RPMs from the 'tested' repo is, 
as Milan mentioned, an increased chance that your own packages will not 
be used because they're too old.
The upside is that if someone breaks OST globally with e.g. an engine 
patch, and a fix for the problem is merged midday, upgrading to the 
latest RPMs will unblock the runs.
If we don't upgrade, we'll have to wait for the nightly job to rebuild 
ost-images to include the fix.
Rebuilding ost-images midday is an option, but it takes a lot of time, 
so in most cases

one can simply wait till tomorrow...

I want to fix this by implementing an option in OST's manual run 
(switched off by default)
that will allow you to upgrade to the latest RPMs from 'tested'. That 
way one has ~24h for his/her patches to be fresh enough to be picked up 
by dnf.
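To sketch why "too old" custom builds get ignored: dnf installs the highest available version of a package, compared segment by segment. Below is a deliberately simplified stand-in for librpm's rpmvercmp (the real algorithm also handles epochs, '~' and '^'; the "mine" version string is a made-up example):

```python
import re

_SEG = re.compile(r'\d+|[a-zA-Z]+')

def vercmp(a, b):
    """Very simplified stand-in for librpm's rpmvercmp: split both
    strings into digit/alpha segments and compare pairwise; numeric
    segments compare numerically and beat alphabetic ones.
    (Epoch, '~' and '^' handling are omitted.)"""
    sa, sb = _SEG.findall(a), _SEG.findall(b)
    for x, y in zip(sa, sb):
        if x.isdigit() and y.isdigit():
            x, y = int(x), int(y)
        elif x.isdigit() != y.isdigit():
            # a numeric segment sorts higher than an alphabetic one
            return 1 if x.isdigit() else -1
        if x != y:
            return 1 if x > y else -1
    # all shared segments equal: the longer version wins
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

# A snapshot built a few days ago loses to the nightly 'tested' build,
# because the datestamp segment compares as a plain number:
mine = "4.4.5.3-0.0.master.20210122164800.git0000000.el8"      # hypothetical
tested = "4.4.5.3-0.0.master.20210125103910.gitd5d5142096e.el8"
print(vercmp(tested, mine))  # 1 -> dnf picks the 'tested' build
```

This is why a custom repo only wins while its snapshot datestamp is newer than the freshest 'tested' build.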

'check-patch' jobs should always use latest RPMs from 'tested' IMO.




This will also add load, and might cause "perceived" instability - as
things will likely fluctuate between green and red more often.

This doesn't sound very good, I perceive the things less than stable
already now.

Agreed.
I quoted "perceived" because I do not think they'll actually be less stable.
Right now, when something critical is broken, we fix it, then manually
run some of the above jobs as needed, to quickly get back to business.
When we don't (often), some things simply remain broken for two days.

Running more often will simply notify us about breakage faster. If we
then fix, it will automatically propagate the fix faster.

Isn't upgrading the engine RPM on the appliance an option?




I think I prefer the latter. What do you think?

Wouldn't it be possible to run the whole pipeline nightly (even if it
means e.g. running the publisher twice during the night)?

It will. But this will only fix the specific issue of appliance/node.
Running more often also simply gives feedback faster.

But I agree that perhaps we should wait with this until OST allows
using a custom repo reliably and easily.

Thanks,

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AOQRQM47RQJIRPQNQF5QL2L6M6GAXSBO/


[ovirt-devel] Re: test_verify_engine_certs (was: [oVirt Jenkins] ovirt-system-tests_basic-suite-master_nightly - Build # 894 - Failure!)

2021-02-22 Thread Marcin Sobczyk

Hi,

On 2/22/21 4:21 PM, Yedidyah Bar David wrote:

On Mon, Feb 22, 2021 at 4:51 PM Artur Socha  wrote:

Hi Didi,
You are probably right that enabling Strict Transport Security caused
that bug as an unfortunate side-effect.
Do you think that adding some sort of exception for the cert URL would be
an acceptable fix?  For example, we have this kind of rule excluding
authentication for the REST API docs.

If we already have an exception, and hopefully some process to add one,
then I think it makes sense for this case as well.

I admit, though, that I do not feel completely happy with this. On one hand,
this is insecure, and on the other hand, there is no way to do this securely
using the existing official means.

This thread also made me think about the hosted-engine deploy process.
In standalone engine setup, the user is responsible for installing the OS,
so it's up to the user to control (or not) generation of the sshd private key
for allowing later secure access to it using ssh. For hosted-engine, it's us,
and I do not think we do anything around this. Perhaps we should.

TL;DR: IMO:
1. Please add an exception. Please open another bug for this.
2. We should document how to get the engine CA cert not using https:
ssh to the engine machine; cat /etc/pki/ovirt-engine/ca.pem .
3. We should consider our options for hosted-engine. Filed now [1].

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1931510

Best regards,

For now I posted a patch for OST that will unblock basic suite [2].
When we have a proper solution we should adapt the tests to the new way 
of working.


Regards, Marcin

[2] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/113649/
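As an aside on the "unable to load certificate ... no start line" symptom seen in this thread: openssl raises it whenever the file handed to it is not PEM at all, e.g. an HTML redirect/error page returned instead of the certificate. A tiny pre-check sketch (the function name is mine, not OST's) that can be run before invoking openssl:

```python
def looks_like_pem_cert(data: bytes) -> bool:
    """Cheap sanity check before handing a downloaded file to
    `openssl x509`: a PEM certificate must contain a BEGIN/END
    CERTIFICATE block; an HTML error page or an empty body does not,
    and makes openssl fail with 'no start line'."""
    return (b"-----BEGIN CERTIFICATE-----" in data
            and b"-----END CERTIFICATE-----" in data)

print(looks_like_pem_cert(b"<html>301 Moved Permanently</html>"))  # False
```

A check like this makes the failure message point at "the server returned something that is not a cert" instead of a cryptic openssl error.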




Artur




On 22.02.2021 13:52, Yedidyah Bar David wrote:

On Mon, Feb 22, 2021 at 3:12 AM  wrote:

Project: 
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/
Build: 
https://jenkins.ovirt.org/job/ovirt-system-tests_basic-suite-master_nightly/894/
Build Number: 894
Build Status:  Failure
Triggered By: Started by timer

-
Changes Since Last Success:
-
Changes for Build #894
[Andrej Cernek] ost_utils: Remove explicit object inheritance




-
Failed Tests:
-
1 tests failed.
FAILED:  
basic-suite-master.test-scenarios.test_002_bootstrap.test_verify_engine_certs[CA
 certificate]

Error Message:
ost_utils.shell.ShellError: Command failed with rc=1. Stdout:  Stderr: unable 
to load certificate 139734854465344:error:0909006C:PEM routines:get_name:no 
start line:crypto/pem/pem_lib.c:745:Expecting: TRUSTED CERTIFICATE

Stack Trace:
key_format = 'X509-PEM-CA'
verification_fn = <function <lambda> at 0x7f6aab2add90>, engine_fqdn = 'engine'
engine_download = <function engine_download.<locals>.download at 0x7f6aa98d5ea0>

@pytest.mark.parametrize("key_format, verification_fn", [
    pytest.param(
        'X509-PEM-CA',
        lambda path: shell.shell(["openssl", "x509", "-in", path, "-text", "-noout"]),
        id="CA certificate"
    ),
    pytest.param(
        'OPENSSH-PUBKEY',
        lambda path: shell.shell(["ssh-keygen", "-l", "-f", path]),
        id="ssh pubkey"
    ),
])
@order_by(_TEST_LIST)
def test_verify_engine_certs(key_format, verification_fn, engine_fqdn,
                             engine_download):
    url = 'http://{}/ovirt-engine/services/pki-resource?resource=ca-certificate&format={}'

I guess (didn't check, only looked at engine git log) that this is a
result of [1].

Anyone looking at this?

This is trying to download the engine ca cert via http, and then do
some verification on it.

Generally speaking, this is a chicken-and-egg problem: You can't
securely download
a ca cert if you need this cert to securely download it.

For OST, it might be easy to fix by s/http/https/ and perhaps passing
some param to
make it not check certs in https. But I find it quite reasonable that
others are doing
similar things and will now be broken by this change [1]. If so, we
might decide that
this is "by design" - that whoever that gets broken, should fix their
stuff one way or
another (like OST above, or via safer means if possible/relevant, such
as using ssh
to securely connect to the engine machine and then get the cert from
there somehow
(do we have an api for this?)). Or we can decide that it's an engine
bug - that [1]
should have allowed this specific url to bypass hsts.

[1] https://gerrit.ovirt.org/c/ovirt-engine/+/113508


    with http_proxy_disabled(), tempfile.NamedTemporaryFile() as tmp:
        engine_download(url.format(engine_fqdn, key_format), tmp.name)
        try:
>           verification_fn(tmp.name)

../basic-suite-master/test-scenarios/test_002_bootstrap.py:292:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../basic-suite-master/test-scenarios/test_002_bootstrap.py:275: in <lambda>
 lambda path: shell.shell(["openssl", "x509", "-in", path, "-text", 

[ovirt-devel] Re: OST fails with NPE in VmDeviceUtils.updateUsbSlots

2021-01-28 Thread Marcin Sobczyk



On 1/28/21 10:30 AM, Arik Hadas wrote:



On Thu, Jan 28, 2021 at 11:21 AM Marcin Sobczyk  wrote:


Hi,

On 1/28/21 9:43 AM, Arik Hadas wrote:
> Hi,
> Seems like our changes to bios type handling led to that.
> Interestingly, OST passed on the patches..
Can you please provide more info on the verification process?


Sure.
The OST job [1] passed on PS 16 of [2] and there was no change on the 
patch between PS 16 and PS 17 that got in.


[1] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7706/
[2] https://gerrit.ovirt.org/#/c/ovirt-engine/+/111657/


So here's some post-mortem analysis.

The repo used for the manual run [3] was [4]. The build has been already 
cleaned up by jenkins, so there's
no way to peek into what versions of engine were built there. We can 
estimate though, based on the date
of 'check-patch' job, which is Jan 22 4:48 PM. The OST run was done on 
Jan 25, 2021 11:41 AM. The built
packages were simply outdated when the OST run was made. We can actually 
see that in 'dnf.log' [5]:


2021-01-25T11:45:51Z INFO Dependencies resolved.
2021-01-25T11:45:51Z INFO

 Package  Arch  Version  Repository  Size
===
Upgrading:
 ...
 ovirt-engine  noarch 
4.4.5.3-0.0.master.20210125103910.gitd5d5142096e.el8 
ovirt-master-tested-el8  13 M


The version of ovirt-engine that was available in 
ovirt-master-tested-el8 repo is from Jan 25.


Some conclusions:
- we use very fresh versions of packages in OST. If you're planning to 
test a package of your own please rebase first
- if you're trying to test your own package please make sure it's 
actually used by OST run, you can check that in dnf.log files
- in the future we should have an automated way of telling whether the 
packages provided by the user actually landed in any of OST's VMs. I 
filed [6] to address this.
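The dnf.log check described above can be scripted. A hypothetical helper (names are mine) that pulls the resolved version of a package out of a dnf transaction listing like the one quoted earlier, so you can confirm whether your custom build was actually installed:

```python
import re

def installed_versions(dnf_log_text, package):
    """Scan a dnf transaction listing for `package` and return the
    versions dnf resolved. Each transaction line looks like:
    ' <name>  <arch>  <version>  <repo>  <size>'."""
    pat = re.compile(r'^\s*' + re.escape(package) + r'\s+\S+\s+(\S+)',
                     re.MULTILINE)
    return pat.findall(dnf_log_text)

# Excerpt shaped like the dnf.log quoted in this thread:
sample = """\
Upgrading:
 ovirt-engine  noarch  4.4.5.3-0.0.master.20210125103910.gitd5d5142096e.el8  ovirt-master-tested-el8  13 M
"""
print(installed_versions(sample, "ovirt-engine"))
# ['4.4.5.3-0.0.master.20210125103910.gitd5d5142096e.el8']
```

If the returned version carries a newer datestamp than your build, your packages were silently outvoted by the 'tested' repo.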


Since this is blocking all basic suite runs I posted a patch [7] that 
disables USB on the VMs we create in the suite. Please review.


Regards, Marcin

[3] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7706/parameters/
[4] 
https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/9989/artifact/check-patch.el8.x86_64/
[5] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7706/artifact/exported-artifacts/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/dnf.log/*view*/

[6] https://issues.redhat.com/browse/RHV-40844
[7] https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/113201/



Regards, Marcin

> Anyway, we look into it.
> Thanks for bringing this to our attention!
>
> On Thu, Jan 28, 2021 at 12:13 AM Vojtech Juranek  wrote:
>
>     Hi,
>     OST fails constantly in test_check_snapshot_with_memory [1] with
>     NPE  in
>     VmDeviceUtils.updateUsbSlots [2]. Build with any additional
>     changes (custom
>     repo) is on [3].
>
>     Unfortunately, I wasn't able to find the root cause. Could
someone
>     please take
>     a look?
>
>     Thanks
>     Vojta
>
>     [1]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/consoleFull
>     [2]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/artifact/exported-artifacts/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log
>     [3]
> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7718/parameters/

[ovirt-devel] Re: OST fails with NPE in VmDeviceUtils.updateUsbSlots

2021-01-28 Thread Marcin Sobczyk

Hi,

On 1/28/21 9:43 AM, Arik Hadas wrote:

Hi,
Seems like our changes to bios type handling led to that.
Interestingly, OST passed on the patches..

Can you please provide more info on the verification process?

Regards, Marcin


Anyway, we look into it.
Thanks for bringing this to our attention!

On Thu, Jan 28, 2021 at 12:13 AM Vojtech Juranek  wrote:


Hi,
OST fails constantly in test_check_snapshot_with_memory [1] with
NPE  in
VmDeviceUtils.updateUsbSlots [2]. Build with any additional
changes (custom
repo) is on [3].

Unfortunately, I wasn't able to find the root cause. Could someone
please take
a look?

Thanks
Vojta

[1]
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/consoleFull

[2]

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7720/artifact/exported-artifacts/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log


[3]
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7718/parameters/

___
Devel mailing list -- devel@ovirt.org 
To unsubscribe send an email to devel-le...@ovirt.org

Privacy Statement: https://www.ovirt.org/privacy-policy.html

oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/

List Archives:

https://lists.ovirt.org/archives/list/devel@ovirt.org/message/M5LFINFRHR3T56UDVBD53EOTUFPDXOPC/




___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6FCPAHKN6ECUTMHPPH37UPPMBDUTPDGL/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/K5EAZU2JOYKIFCC2ZV4OEZHHQLINOZBR/


[ovirt-devel] Re: OST on Stream

2021-01-22 Thread Marcin Sobczyk



On 1/22/21 12:00 PM, Marcin Sobczyk wrote:

Hi

On 1/22/21 11:45 AM, Michal Skrivanek wrote:

There seem to be multiple issues really

1) First it fails on “unknown state” of ovirt-engine-notifier service. Seems 
the OST code is not handling service states well and just explodes
2) Either way, the problem is it's stopped. It should be running
3) if you start it manually it gets to the engine-config test and restarts 
engine. It explodes here again, apparently same reason as #1
4) host installation fails - didi’s 
https://gerrit.ovirt.org/#/c/ovirt-engine/+/113101/ is fixing it
5)  verify_engine_notifier fails when we try to stop it. it’s not running due 
to #2 and it explodes anyway due to #1

rest seems to work fine except few more #1 issues, so that’s good…

Marcin, can you please prioritize #1?

Sure, looking...

This is a known issue. Bugs that have been filed for this:

https://bugzilla.redhat.com/show_bug.cgi?id=1908275
https://bugzilla.redhat.com/show_bug.cgi?id=1901449

There's also a workaround made for ansible modules:

https://github.com/ansible/ansible/issues/71528

I actually cannot reproduce this issue on my CentOS 8.3 servers.
I think the problem is that CI agents are based on RHEL 8.2, which has 
the unpatched ansible version.
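For context on what "unknown state" means here: systemd's ActiveState has a small fixed set of values, and code that treats anything outside its expected subset as fatal will explode the way described above. A sketch (my own illustration, not OST's or ansible's actual code) of mapping states tolerantly instead:

```python
def interpret_active_state(state: str) -> str:
    """Map systemd ActiveState values into coarse buckets instead of
    failing on anything unexpected. The documented values are:
    active, reloading, activating (service is coming up or up) and
    inactive, failed, deactivating (service is down or going down)."""
    running = {"active", "reloading", "activating"}
    stopped = {"inactive", "failed", "deactivating"}
    if state in running:
        return "running"
    if state in stopped:
        return "stopped"
    # anything else is reported, not raised
    return "unknown"

print(interpret_active_state("activating"))  # running
```

Bucketing transient states like "activating" avoids races where a check lands while a service is mid-restart.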


Regards, Marcin




Martin, any idea about notifier?

Thanks,
michal


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZIBGOENQ5D4KFNGHQOK74BYMAIKFSG3E/


[ovirt-devel] Re: OST on Stream

2021-01-22 Thread Marcin Sobczyk

Hi

On 1/22/21 11:45 AM, Michal Skrivanek wrote:

There seem to be multiple issues really

1) First it fails on “unknown state” of ovirt-engine-notifier service. Seems 
the OST code is not handling service states well and just explodes
2) Either way, the problem is it's stopped. It should be running
3) if you start it manually it gets to the engine-config test and restarts 
engine. It explodes here again, apparently same reason as #1
4) host installation fails - didi’s 
https://gerrit.ovirt.org/#/c/ovirt-engine/+/113101/ is fixing it
5)  verify_engine_notifier fails when we try to stop it. it’s not running due 
to #2 and it explodes anyway due to #1

rest seems to work fine except few more #1 issues, so that’s good…

Marcin, can you please prioritize #1?

Sure, looking...


Martin, any idea about notifier?

Thanks,
michal


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KO35XG5AOAXWL6EL4ADSRR4QEN3SJBQW/


[ovirt-devel] OST design document

2021-01-20 Thread Marcin Sobczyk

Hi All,

I've prepared a google doc that roughly describes the changes being 
made/planned in the OST project and the rationale behind them.

Please find it under this link:

https://docs.google.com/document/d/1vY4K5YhI9wYvArJD6pW3poBxyq-geS4pTYU8-R0fXyw/edit?usp=sharing

I'm really bad at working with google docs, so if the link doesn't work 
for you please ping me privately and I'll fix it.


Any feedback is very appreciated.

Regards, Marcin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KQH2WM4QPZRTRX5L5E4ZPCJDSQZSQ6JL/


[ovirt-devel] Re: [VDSM] Test pass, build fail during cleanup

2021-01-19 Thread Marcin Sobczyk

Hi,

This looks like something wrong with the CI agents; in the part which 
scrubs the chroot we can find:


[2021-01-19T12:10:44.669Z] Finish: scrub ['chroot']
[2021-01-19T12:10:44.669Z] Traceback (most recent call last):
[2021-01-19T12:10:44.669Z]   File "/dev/fd/6", line 1059, in <module>
[2021-01-19T12:10:44.669Z] exitStatus = main()
[2021-01-19T12:10:44.669Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.669Z] result = func(*args, **kw)
[2021-01-19T12:10:44.669Z]   File "/dev/fd/6", line 825, in main
[2021-01-19T12:10:44.669Z] result = run_command(options, args, 
config_opts, commands, buildroot, state)
[2021-01-19T12:10:44.669Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.669Z] result = func(*args, **kw)
[2021-01-19T12:10:44.670Z]   File "/dev/fd/6", line 859, in run_command
[2021-01-19T12:10:44.670Z] commands.scrub(options.scrub)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.670Z] result = func(*args, **kw)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/backend.py", line 125, in scrub

[2021-01-19T12:10:44.670Z] self.buildroot.delete()
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.670Z] result = func(*args, **kw)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/buildroot.py", line 869, in 
delete
[2021-01-19T12:10:44.670Z] file_util.rmtree(self.basedir, 
selinux=self.selinux)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.670Z] result = func(*args, **kw)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/file_util.py", line 59, in 
rmtree
[2021-01-19T12:10:44.670Z] rmtree(fullname, selinux=selinux, 
exclude=exclude)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.670Z] result = func(*args, **kw)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/file_util.py", line 59, in 
rmtree
[2021-01-19T12:10:44.670Z] rmtree(fullname, selinux=selinux, 
exclude=exclude)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.670Z] result = func(*args, **kw)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/file_util.py", line 59, in 
rmtree
[2021-01-19T12:10:44.670Z] rmtree(fullname, selinux=selinux, 
exclude=exclude)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.670Z] result = func(*args, **kw)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/file_util.py", line 59, in 
rmtree
[2021-01-19T12:10:44.670Z] rmtree(fullname, selinux=selinux, 
exclude=exclude)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.670Z] result = func(*args, **kw)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/file_util.py", line 59, in 
rmtree
[2021-01-19T12:10:44.670Z] rmtree(fullname, selinux=selinux, 
exclude=exclude)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/trace_decorator.py", line 
93, in trace

[2021-01-19T12:10:44.670Z] result = func(*args, **kw)
[2021-01-19T12:10:44.670Z]   File 
"/usr/lib/python3.6/site-packages/mockbuild/file_util.py", line 68, in 
rmtree

[2021-01-19T12:10:44.670Z] os.rmdir(path)
[2021-01-19T12:10:44.670Z] OSError: [Errno 16] Device or resource busy: 
'/var/lib/mock/epel-8-x86_64-8e6e57af00668dc4b1252bf1bf4d754c-2645436/root/var/tmp/vdsm-storage/mount.file-512'

[2021-01-19T12:10:44.670Z] Scrub chroot took 10 seconds

Evgheni, could you please take a look at it?

Regards, Marcin
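The `OSError: [Errno 16] Device or resource busy` from `os.rmdir` above typically means something is still mounted under the tree being removed (the path `vdsm-storage/mount.file-512` suggests a mount left behind by a storage test). A small sketch of how one could spot such leftovers from /proc/mounts-style data before scrubbing:

```python
def mounts_under(prefix, proc_mounts_text):
    """Return mount points under `prefix` from /proc/mounts-style text
    (whitespace-separated fields, mount point in the second column).
    Such leftover mounts must be umounted before the directory tree
    can be removed; otherwise rmdir fails with EBUSY."""
    result = []
    for line in proc_mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1].startswith(prefix):
            result.append(fields[1])
    return result

# Hypothetical /proc/mounts line matching the failing path above:
sample = ("/dev/loop0 /var/lib/mock/epel-8-x86_64/root/var/tmp/"
          "vdsm-storage/mount.file-512 ext4 rw 0 0")
print(mounts_under("/var/lib/mock", sample))  # the leftover mount
```

On a live system one would read the text from `/proc/mounts` and umount each returned path before letting mock scrub the chroot.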


On 1/19/21 1:49 PM, Nir Soffer wrote:

We seem to have an issue in the CI, starting this week.

All tests pass, but creating coverage report fail:

+ generate_combined_coverage_report
+ pushd tests
~/tests ~
+ pwd
/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests
+ ls .coverage-gluster .coverage-lib .coverage-network .coverage-nose
.coverage-storage .coverage-virt
.coverage-gluster
.coverage-lib
.coverage-network
.coverage-nose
.coverage-storage
.coverage-virt
+ python3 -m coverage combine .coverage-gluster .coverage-lib
.coverage-network .coverage-nose .coverage-storage .coverage-virt
No usable data files
Coverage.py warning: Couldn't read data from
'/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/tests/.coverage-gluster':

[ovirt-devel] Re: bz 1915329: [Stream] Add host fails with: Destination /etc/pki/ovirt-engine/requests not writable

2021-01-18 Thread Marcin Sobczyk



On 1/18/21 9:58 AM, Yedidyah Bar David wrote:

On Mon, Jan 18, 2021 at 10:53 AM Martin Perina  wrote:



On Mon, Jan 18, 2021 at 9:08 AM Yedidyah Bar David  wrote:

On Sun, Jan 17, 2021 at 3:11 PM Yedidyah Bar David  wrote:

On Thu, Jan 14, 2021 at 1:41 PM Yedidyah Bar David  wrote:

On Thu, Jan 14, 2021 at 8:35 AM Yedidyah Bar David  wrote:

On Wed, Jan 13, 2021 at 5:34 PM Yedidyah Bar David  wrote:

On Wed, Jan 13, 2021 at 2:48 PM Yedidyah Bar David  wrote:

On Wed, Jan 13, 2021 at 1:57 PM Marcin Sobczyk  wrote:

Hi,

my guess is it's selinux-related.

Unfortunately I can't find any meaningful errors in audit.log in a
scenario where host deployment fails.
However switching selinux to permissive mode before adding hosts makes
the problem go away, so it's probably not an error somewhere in logic.

It's getting weirder: Under strace, it succeeds:

https://gerrit.ovirt.org/c/ovirt-system-tests/+/112948

(Can't see the actual log, as I didn't add '-A', so it was overwritten
on restart...)

After updating it to use '-A' it indeed shows that it worked:

43664 14:16:55.997639 access("/etc/pki/ovirt-engine/requests", W_OK

43664 14:16:55.997695 <... access resumed>) = 0

Weird.

Now ran in parallel 'ci test' for this patch and another one from
master, for comparison:

Again, the same:


https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/14916/

With strace, passed,


https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1883/

Without strace, failed.

Last nightly run that passed [1] used:

ost-images-el8-host-installed-1-202101100446.x86_64
ovirt-engine-appliance-4.4-20210109182828.1.el8.x86_64

Trying now with these - not sure it's possible to put specific versions inside
automation/*packages, let's see:

https://gerrit.ovirt.org/c/ovirt-system-tests/+/112977

Indeed, with a fixed ost-images and removing updates, it passes. network suite
failed, but he-basic passed:

https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/14920/artifact/ci_build_summary.html

So I am quite certain this is an OS issue. Not sure how we do not see
this in basic-suite.
Perhaps it's related to nested-kvm, or to load/slowness caused by that? Weird.

when this fails, we do not collect all engine's /var/log, only
messages and ovirt-engine/ .
So it's not easy to get a list of the packages that were updated.

Pushed now:

https://github.com/oVirt/ovirt-ansible-collection/pull/202

to get all of engine's /var/log, and ran manual HE job with it:

https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7680/

This one I accidentally ran with the wrong repo, then ran another one
with the correct repo [1],
But:

1. The repo wasn't used. Emailed about this a separate thread: "manual
job does not use custom repo"

2. It passed! Being what seems like a heisenbug, I understand why when
you run it under strace it
works differently. But even if you just intend to collect more logs it
also causes it to behave
differently? :-) This does not mean that "problem solved" - latest
nightly run [2] did fail with
the same error.

Status:

1. he-basic-suite is still failing.

2. Patch to collect all of /var/log from the engine merged.

Dana, can you please update? Did you have any progress?

IMO it's an OS bug. If Marcin says it's an selinux issue, I do not argue :-).
So, how do we continue?


Switching to CentOS Stream development/testing is a big effort, I'm not sure we 
can do this and still deliver all the RFEs/bugs planned for 4.4.5 ...

+1

IMO we should now revert appliance and node to CentOS 8.3, and then
continue the discussion.
Having he-basic-suite broken for a week is too much.
+1 The testing infrastructure for Stream is here, but if it doesn't work 
yet then let's stick to the plan and focus on 8.3.







[1] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7681/
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1887/




[1] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/1879/

--
Didi



--
Didi



--
Didi



--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.




___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MHE2YPXUYGDX6IBS265BPXEXLGCEOZWI/


[ovirt-devel] Re: bz 1915329: [Stream] Add host fails with: Destination /etc/pki/ovirt-engine/requests not writable

2021-01-13 Thread Marcin Sobczyk

Hi,

my guess is it's selinux-related.

Unfortunately I can't find any meaningful errors in audit.log in a 
scenario where host deployment fails.
However switching selinux to permissive mode before adding hosts makes 
the problem go away, so it's probably not an error somewhere in logic.


Regards, Marcin

On 1/12/21 1:54 PM, Yedidyah Bar David wrote:

Hi all,

Now filed $Subject [1].

Any clues are most welcome. Thanks.

Best regards,

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1915329

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/B5CQ4XYUIMA5AOTTRF77YEVPCEDPIT4Z/


[ovirt-devel] Re: OST UI tests are skipped during check patch for OST repo

2020-12-21 Thread Marcin Sobczyk



On 12/21/20 9:08 AM, Lucia Jelinkova wrote:

Hi all,

I've recently refactored some of the UI tests in the OST basic suite 
and added a new test for integration with Grafana [1]. I pushed my 
patches to Gerrit, however, I am not able to run them because during 
the check patch CI job (almost) all UI tests are skipped [2].


How can I make Jenkins to run them?

Hi Lucia,

unfortunately we don't have that in u/s CI at the moment.

We had to drop all el7 jobs to get rid of all the legacy stuff and 
complete our move to el8 and py3.
In el7 we had docker with its socket exposed to mock, so we could use 
containers to run the selenium grid.
In el8 there is no docker, we have podman. Podman on its own doesn't 
work in a mock chroot.
With CentOS 8.3 there was some hope, since the version of podman 
provided has an experimental socket support,

meaning we could use it the same way like we used docker.

I tried it out, but even though the socket itself works, there are 
limitations on the network implementation side.
To have the complete setup working, we need the browsers running inside 
containers to be able to access the engine's http server. This is only 
possible when using the slirp4netns networking backend.
Unfortunately, with this version of podman there is no way to choose the 
networking backend for pods.
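For context, the experimental socket mentioned here is podman's docker-compatible API socket. Exposing it looks roughly like this (a sketch using the upstream defaults for the unit name and socket path; none of this is OST-specific configuration):

```shell
# Expose podman's experimental docker-compatible API socket (rootless).
systemctl --user enable --now podman.socket
# Point docker-API clients at the podman socket instead of dockerd's.
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"
# The browser containers would additionally need slirp4netns networking
# to reach the engine's http server, which this podman version cannot
# select per pod.
```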


For now, my advice would be to try running OST on your own machine.
If it has 16 GB of RAM and ~12 GB of free space on /, then you should be 
good.
There's a thread on the devel mailing list called "How to set up a 
(rh)el8 machine for running OST" where you can find instructions on how 
to prepare your machine for running OST.


I'm keeping my eye on the podman situation and will let you know if we 
have something working.


Regards, Marcin



Regards,

Lucia

1: https://gerrit.ovirt.org/#/c/ovirt-system-tests/+/112737/ 

2: 
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/14501/testReport/basic-suite-master.test-scenarios/test_100_basic_ui_sanity/ 
. 




___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/IRJ7Q5VJTKHMXLABURA76YMXAMDL347J/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/A6YJJN5KCONHIUO25EYC45JVLOSG56SE/


[ovirt-devel] Re: oVirt appliance on master is now based on CentOS Stream

2020-12-16 Thread Marcin Sobczyk



On 12/16/20 11:22 AM, Yedidyah Bar David wrote:
On Wed, Dec 16, 2020 at 12:11 PM Yuval Turgeman wrote:


Wow, very nice !!

On Wed, Dec 16, 2020 at 12:07 PM Sandro Bonazzola
<sbona...@redhat.com> wrote:

Hi,
just a heads-up that the oVirt Appliance is now based on CentOS
Stream, targeting 4.4.5 to be released.
The build should be available in the master repos tomorrow morning.


What about ovirt-node and ost-images?

We build el8stream-based ost images nightly.
Currently the u/s nightly job is broken again, so the ones you will find 
on templates.ovirt.org are a bit outdated.

The d/s ost images repo is up to date AFAICS.

To run OST on el8stream you should:

export OST_IMAGES_DISTRO=el8stream
./run_suite.sh basic-suite-master

Regards, Marcin



Thanks,
--
Didi

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FQ7VAXLSP7LAXKBXBKO26YZZ4E4OJHME/


[ovirt-devel] Re: Dropping all el7 OST runs

2020-12-15 Thread Marcin Sobczyk



On 12/3/20 4:15 PM, Marcin Sobczyk wrote:

On 12/3/20 1:02 PM, Ehud Yonasi wrote:

Correct. It is now fixed with the manual ost job.

Tried running a manual job a moment ago, but ended up with a
docker-related error [4].

[4]
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7604/console#L1,285

Hi Ehud, any updates on this? Manual OST runs are still broken.

Regards, Marcin




On Thu, Dec 3, 2020 at 1:54 PM Marcin Sobczyk <msobc...@redhat.com> wrote:

    On 12/3/20 12:33 PM, Ehud Yonasi wrote:
    > Hey,
    >
    > If there are no more OST jobs that will use el7, Evgheni can start
    > rebuilding those nodes to el8.
    There shouldn't be.

    > Jenkins won't respawn the ost container anymore.
    So right now if I try to run a manual OST job it still seems to be
    using containers.
    Isn't [3] supposed to be changed?

    [3] https://github.com/oVirt/jenkins/blob/0a824788eda1e6d6755b451817e3964e1bf14bfc/jobs/confs/projects/ovirt/system-tests.yaml#L98

    > On Thu, Dec 3, 2020 at 1:28 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
    >
    >     On 12/1/20 8:18 PM, Marcin Sobczyk wrote:
    >     > Hi,
    >     >
    >     > the patch that removed all el7 OST runs has been merged.
    >     >
    >     > Regards, Marcin
    >     >
    >     > On 11/26/20 12:04 PM, Anton Marchukov wrote:
    >     >> Ehud will switch it back to baremetals on the next week. The same will
    >     >> have to be done for OST CI itself. After that we can rebuild unused
    >     >> openshift nodes back to CI baremetals. And also convert el7 baremetals
    >     >> to el8 ones (I guess we will leave few and watch the load on them).
    >     >> Please expect some capacity drop during this time.
    >     Anton, Ehud, any progress on this?
    >
    >     Thanks, Marcin
    >
    >     >> On Thu, Nov 26, 2020 at 12:00 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
    >     >>
    >     >>      Hi,
    >     >>
    >     >>      since all the important suites are working on el8 already, we're
    >     >>      planning to drop all el7 OST runs with [1] very soon.
    >     >>
    >     >>      This means we can finally say goodbye to py2 and other legacy stuff!
    >     >>
    >     >>      We still need to move manual OST runs not to use containers for
    >     >>      that to happen.
    >     >>      This effort is tracked here [2].
    >     >>
    >     >>      Regards, Marcin
    >     >>
    >     >>      [1] https://gerrit.ovirt.org/112378
    >     >>      [2] https://ovirt-jira.atlassian.net/browse/OST-145
    >     >>
    >     >> --
    >     >> Anton Marchukov
    >     >> Associate Manager - RHV DevOps - Red Hat



___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5QJ3DN5LOHJTZW5LYLXVS2QMSKGGVJ5T/


[ovirt-devel] Re: ovirt-engine CI check-patch failed

2020-12-15 Thread Marcin Sobczyk

Hi,

On 12/15/20 1:12 PM, Eyal Shenitzky wrote:

Hi,

I see the following error on the CI -

14:03:36 + usrc=/home/jenkins/workspace/ovirt-engine_standard-check-patch/jenkins/stdci_tools/usrc.
14:03:36 + [[ -x /home/jenkins/workspace/ovirt-engine_standard-check-patch/jenkins/stdci_tools/usrc.py ]]
14:03:36 + /home/jenkins/workspace/ovirt-engine_standard-check-patch/jenkins/stdci_tools/usrc.py --log -d get
14:03:36 /usr/bin/env: ‘python’: No such file or directory

See - https://jenkins.ovirt.org/job/ovirt-engine_standard-check-patch/9286/console



Can someone please take a look?

There's already an infra ticket for this - 
https://ovirt-jira.atlassian.net/browse/OVIRT-3073
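The error itself means the el8 node has no unversioned `python` binary, only `python3`. Two possible workarounds, sketched under the assumption that the infra fix ends up along these lines (`USRC_PY` is a placeholder variable, not something stdci defines):

```shell
# Option 1 (system-wide, needs root): register python3 as 'python':
#   alternatives --set python /usr/bin/python3
# Option 2 (per-script): point the shebang at python3 explicitly.
f="${USRC_PY:-stdci_tools/usrc.py}"   # USRC_PY: placeholder path override
if [ -f "$f" ]; then
    sed -i '1s|^#!/usr/bin/env python$|#!/usr/bin/env python3|' "$f"
fi
```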


Regards, Marcin




Regards,
Eyal Shenitzky

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XLSC5FPI6WZSFX6YIZQJRZEQOET6SAE4/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EULIC2TQISZ26WQFJLGFF64DSXF3BJPP/


[ovirt-devel] OST failures on dnf_upgrade

2020-12-14 Thread Marcin Sobczyk

Hi All,

there's a known issue with OST failing right now during deployment on 
'dnf_upgrade'.
The problem is that the 'mdevctl' package is missing from the CentOS 
repos and new libvirt versions require it. While we wait for an official 
fix, we're rebuilding ost-images with the package included from a COPR 
build. Hopefully OST will work again in 1-2 hours. Sorry for the 
inconvenience.
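For reference, pulling a single missing package from COPR looks roughly like this ('someuser/mdevctl' is a placeholder project name, not the actual build we used):

```shell
# Enable the 'dnf copr' subcommand, add the COPR repo, install the package.
dnf install -y 'dnf-command(copr)'
dnf copr enable -y someuser/mdevctl   # placeholder COPR project
dnf install -y mdevctl
```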


Regards, Marcin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7G6VQMVTFX4BE46KDJXOXRWUCEB5OI6D/


[ovirt-devel] OST images moved to CentOS 8.3

2020-12-09 Thread Marcin Sobczyk

Hi,

the prebuilt OST images have been migrated to CentOS 8.3.
If you find any issues with the new images, please let me know.

There is a known issue of 'test_098_ovirt_provider_ovn' failing, but 
it's not related to CentOS 8.3 and should be fixed as soon as we get a 
new engine build.

We also have a new layer of 'ost-images-el8-he-installed' that contains 
'ovirt-hosted-engine-appliance' installed and is beneficial to HE suites.

Finally, somewhat related to yesterday's announcement, FYI we have 
already had nightly-built OST images based on CentOS Stream for quite 
some time. For those of you interested in trying them out, it should be 
as simple as:

dnf install ost-images-el8stream-engine-installed 
ost-images-el8stream-host-installed

export OST_IMAGES_DISTRO=el8stream
./run_suite.sh basic-suite-master

Regards, Marcin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZXJGAPP2GWLJF3UVNW7D56WNIV5RT4VD/


[ovirt-devel] Re: Problems building CentOS 8.3-based engine image for OST

2020-12-09 Thread Marcin Sobczyk



On 12/9/20 8:46 AM, Sandro Bonazzola wrote:



On Tue, Dec 8, 2020 at 10:08 AM Marcin Sobczyk 
<msobc...@redhat.com> wrote:




On 12/7/20 5:12 PM, Marcin Sobczyk wrote:
>
> On 12/7/20 4:31 PM, Marcin Sobczyk wrote:
>> On 12/7/20 4:15 PM, Michal Skrivanek wrote:
>>>> On 7 Dec 2020, at 16:06, Marcin Sobczyk <msobc...@redhat.com> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> since CentOS 8.3 is out, I'm trying to build a new base image
for OST, but there are problems on the engine side.
>>>> The provisioning script we use to build the engine VM is here
[1].
>>>>
>>>> The build ends with errors:
>>>>
>>>> Error: Problems in request:
>>>> missing groups or modules: javapackages-tools
>>> does it no longer exist or not built yet?
>> Found a module called 'javapackages-runtime'.
>> When I enabled it we're left with "Problem 1" only:
>>
>> Error:
>>     Problem: cannot install the best candidate for the job
>>      - nothing provides apache-commons-compress needed by
>>
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
>>      - nothing provides apache-commons-jxpath needed by
>>
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
>> (try to add '--skip-broken' to skip uninstallable packages or
'--nobest'
>> to use not only best candidate packages)
> After discussing offline with Artur it turned out that the
> 'javapackages-tools' comes from 'PowerTools' repo.
> We enable it in an RPM script during ovirt release RPM installation:
>
>   if [ -f /etc/yum.repos.d/CentOS-PowerTools.repo ] ; then
>       sed -i "s:enabled=0:enabled=1:" /etc/yum.repos.d/CentOS-PowerTools.repo
>   fi
>
> but it seems the name of the repofile in 8.3 has changed to
> 'CentOS-Linux-PowerTools.repo'.
> After enabling 'PowerTools' and 'javapackages-tools' the
installation
> went smoothly.
Lev, are you able to modify the release RPM so it handles both cases?


Lev is on PTO today; if this needs an urgent fix and can't wait till 
tomorrow, please let me know.

We have a simple workaround [2], so no rush.

[2] 
https://gerrit.ovirt.org/#/c/ost-images/+/112544/1/el8-provision-engine.sh.in



Regards, Marcin

>
>
>>
>>>> Last metadata expiration check: 0:00:02 ago on Mon Dec  7
16:00:56 2020.
>>>> Error:
>>>>     Problem 1: cannot install the best candidate for the job
>>>>      - nothing provides apache-commons-compress needed by
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
>>>>      - nothing provides apache-commons-jxpath needed by
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
>>>>     Problem 2: package
ovirt-engine-extension-aaa-ldap-setup-1.4.3-0.289.202010220206.el8.noarch
requires ovirt-engine-extension-aaa-ldap =
1.4.3-0.289.202010220206.el8, but none of the providers can be
installed
>>>>      - package
ovirt-engine-extension-aaa-ldap-1.4.3-0.289.202010220206.el8.noarch
requires slf4j-jdk14, but none of the providers can be installed
>>>>      - conflicting requests
>>>>      - package
slf4j-jdk14-1.7.25-4.module_el8.3.0+454+67dccca4.noarch is
filtered out by modular filtering
>>> or perhaps it just got renamed (that’s what it should if
that’s the case)
>>>
>>>> Please advise.
>>>>
>>>> Thanks, Marcin
>>>>
>>>> [1]

https://gerrit.ovirt.org/gitweb?p=ost-images.git;a=blob_plain;f=el8-provision-engine.sh.in;hb=HEAD

<https://gerrit.ovirt.org/gitweb?p=ost-images.git;a=blob_plain;f=el8-provision-engine.sh.in;hb=HEAD>
>>>>



--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com

Red Hat respects your work life balance. Therefore there is no need 
to answer this email out of your office hours.
<https://mojo.redhat.com/docs/DOC-1199578>

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QOVDJAFTYYVPGFXUOXPEK5AHSYFW7WVD/


[ovirt-devel] Re: [rhev-devel] ost check patch

2020-12-08 Thread Marcin Sobczyk



On 12/8/20 1:05 PM, Ehud Yonasi wrote:
We are now working on upgrading the templates server to 8.3, and will 
update when it's done.
Retriggered the pipeline on the templates server and we're past the 
point that failed before, so I think we're home. Thanks!

I think we can proceed with upgrading the other agents.



On Tue, Dec 8, 2020 at 1:37 PM Marcin Sobczyk <msobc...@redhat.com> wrote:




On 12/8/20 10:31 AM, Galit Rosenthal wrote:
> Hi All,
>
> We are working on fixing the issue.
> Once it will be fixed an updated mail will be sent.
So it's worse than I thought, unfortunately.
The problem occurs even when trying to rebuild ost-images with a simple
'virt-install' [3].
Most probably there's a breakage between the CentOS 8.3 userspace we get
with mock and the CentOS 8.2 libvirtd we run on CI agents.
If that's true we need to upgrade all our el8 agents in CI to 8.3 ASAP.
I'd start with the templates.ovirt.org server to see if I can rebuild
ost-images.

Regards, Marcin

>
> Regards,
> Galit
>
> On Tue, Dec 8, 2020 at 11:09 AM Eitan Raviv <era...@redhat.com> wrote:
>
>     Hi,
>     Since yesterday check patch jobs [1][2] are failing for me on
>
>     emulator not found...
>
>     kvm executable not found
>
>     Any hint appreciated.
>
>     Thanks
>
>     [1] https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/14274/
>     [2] https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/14263

[3] https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/1636/pipeline#step-189-log-287

>
> --
> Galit Rosenthal
> SOFTWARE ENGINEER
> Red Hat <https://www.redhat.com/>
> ga...@redhat.com T: 972-9-7692230


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AKN7F22JCBEBRTEP5TZMENTHUEKQPAGR/


[ovirt-devel] Re: [rhev-devel] ost check patch

2020-12-08 Thread Marcin Sobczyk



On 12/8/20 10:31 AM, Galit Rosenthal wrote:

Hi All,

We are working on fixing the issue.
Once it will be fixed an updated mail will be sent.

So it's worse than I thought, unfortunately.
The problem occurs even when trying to rebuild ost-images with a simple 
'virt-install' [3].
Most probably there's a breakage between the CentOS 8.3 userspace we get 
with mock and the CentOS 8.2 libvirtd we run on CI agents.

If that's true we need to upgrade all our el8 agents in CI to 8.3 ASAP.
I'd start with the templates.ovirt.org server to see if I can rebuild 
ost-images.


Regards, Marcin



Regards,
Galit

On Tue, Dec 8, 2020 at 11:09 AM Eitan Raviv wrote:


Hi,
Since yesterday check patch jobs [1][2] are failing for me on

emulator not found...

kvm executable not found

Any hint appreciated.

Thanks

[1] https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/14274/
[2] https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_standard-check-patch/detail/ovirt-system-tests_standard-check-patch/14263




[3] https://jenkins.ovirt.org/blue/organizations/jenkins/standard-manual-runner/detail/standard-manual-runner/1636/pipeline#step-189-log-287




--

Galit Rosenthal

SOFTWARE ENGINEER

Red Hat

ga...@redhat.com  T: 972-9-7692230 






___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QYOPGJ7T2DK6UXO4TVPA4TIHSG4IRNX2/


[ovirt-devel] Re: Problems building CentOS 8.3-based engine image for OST

2020-12-08 Thread Marcin Sobczyk



On 12/7/20 5:12 PM, Marcin Sobczyk wrote:


On 12/7/20 4:31 PM, Marcin Sobczyk wrote:

On 12/7/20 4:15 PM, Michal Skrivanek wrote:

On 7 Dec 2020, at 16:06, Marcin Sobczyk  wrote:

Hi All,

since CentOS 8.3 is out, I'm trying to build a new base image for OST, but 
there are problems on the engine side.
The provisioning script we use to build the engine VM is here [1].

The build ends with errors:

Error: Problems in request:
missing groups or modules: javapackages-tools

does it no longer exist or not built yet?

Found a module called 'javapackages-runtime'.
When I enabled it we're left with "Problem 1" only:

Error:
    Problem: cannot install the best candidate for the job
     - nothing provides apache-commons-compress needed by
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
     - nothing provides apache-commons-jxpath needed by
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest'
to use not only best candidate packages)

After discussing offline with Artur it turned out that the
'javapackages-tools' comes from 'PowerTools' repo.
We enable it in an RPM script during ovirt release RPM installation:

      if [ -f /etc/yum.repos.d/CentOS-PowerTools.repo ] ; then
          sed -i "s:enabled=0:enabled=1:" /etc/yum.repos.d/CentOS-PowerTools.repo
      fi

but it seems the name of the repofile in 8.3 has changed to
'CentOS-Linux-PowerTools.repo'.
After enabling 'PowerTools' and 'javapackages-tools' the installation
went smoothly.

Lev, are you able to modify the release RPM so it handles both cases?
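A possible shape for that fix, as a hedged sketch (the real release-RPM scriptlet may differ; `REPO_DIR` is only a hook so the loop can be exercised outside /etc):

```shell
# Enable PowerTools whichever filename the release uses: CentOS 8.2
# ships CentOS-PowerTools.repo, CentOS 8.3 renamed it to
# CentOS-Linux-PowerTools.repo.
REPO_DIR="${REPO_DIR:-/etc/yum.repos.d}"
for repo in "$REPO_DIR/CentOS-PowerTools.repo" \
            "$REPO_DIR/CentOS-Linux-PowerTools.repo"; do
    if [ -f "$repo" ]; then
        sed -i "s:enabled=0:enabled=1:" "$repo"
    fi
done
```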

Regards, Marcin







Last metadata expiration check: 0:00:02 ago on Mon Dec  7 16:00:56 2020.
Error:
Problem 1: cannot install the best candidate for the job
 - nothing provides apache-commons-compress needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
 - nothing provides apache-commons-jxpath needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
Problem 2: package 
ovirt-engine-extension-aaa-ldap-setup-1.4.3-0.289.202010220206.el8.noarch 
requires ovirt-engine-extension-aaa-ldap = 1.4.3-0.289.202010220206.el8, but 
none of the providers can be installed
 - package 
ovirt-engine-extension-aaa-ldap-1.4.3-0.289.202010220206.el8.noarch requires 
slf4j-jdk14, but none of the providers can be installed
 - conflicting requests
 - package slf4j-jdk14-1.7.25-4.module_el8.3.0+454+67dccca4.noarch is 
filtered out by modular filtering

or perhaps it just got renamed (that’s what it should if that’s the case)


Please advise.

Thanks, Marcin

[1] 
https://gerrit.ovirt.org/gitweb?p=ost-images.git;a=blob_plain;f=el8-provision-engine.sh.in;hb=HEAD


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/QYMGJCQFFG456K2X6G4YKVATGSX2P73T/


[ovirt-devel] Re: Problems building CentOS 8.3-based engine image for OST

2020-12-07 Thread Marcin Sobczyk



On 12/7/20 4:31 PM, Marcin Sobczyk wrote:


On 12/7/20 4:15 PM, Michal Skrivanek wrote:

On 7 Dec 2020, at 16:06, Marcin Sobczyk  wrote:

Hi All,

since CentOS 8.3 is out, I'm trying to build a new base image for OST, but 
there are problems on the engine side.
The provisioning script we use to build the engine VM is here [1].

The build ends with errors:

Error: Problems in request:
missing groups or modules: javapackages-tools

does it no longer exist or not built yet?

Found a module called 'javapackages-runtime'.
When I enabled it we're left with "Problem 1" only:

Error:
   Problem: cannot install the best candidate for the job
    - nothing provides apache-commons-compress needed by
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
    - nothing provides apache-commons-jxpath needed by
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest'
to use not only best candidate packages)
After discussing offline with Artur it turned out that the 
'javapackages-tools' comes from 'PowerTools' repo.

We enable it in an RPM script during ovirt release RPM installation:

    if [ -f /etc/yum.repos.d/CentOS-PowerTools.repo ] ; then
        sed -i "s:enabled=0:enabled=1:" /etc/yum.repos.d/CentOS-PowerTools.repo
    fi

but it seems the name of the repofile in 8.3 has changed to 
'CentOS-Linux-PowerTools.repo'.
After enabling 'PowerTools' and 'javapackages-tools' the installation 
went smoothly.







Last metadata expiration check: 0:00:02 ago on Mon Dec  7 16:00:56 2020.
Error:
   Problem 1: cannot install the best candidate for the job
- nothing provides apache-commons-compress needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
- nothing provides apache-commons-jxpath needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
   Problem 2: package 
ovirt-engine-extension-aaa-ldap-setup-1.4.3-0.289.202010220206.el8.noarch 
requires ovirt-engine-extension-aaa-ldap = 1.4.3-0.289.202010220206.el8, but 
none of the providers can be installed
- package 
ovirt-engine-extension-aaa-ldap-1.4.3-0.289.202010220206.el8.noarch requires 
slf4j-jdk14, but none of the providers can be installed
- conflicting requests
- package slf4j-jdk14-1.7.25-4.module_el8.3.0+454+67dccca4.noarch is 
filtered out by modular filtering

or perhaps it just got renamed (that’s what it should if that’s the case)


Please advise.

Thanks, Marcin

[1] 
https://gerrit.ovirt.org/gitweb?p=ost-images.git;a=blob_plain;f=el8-provision-engine.sh.in;hb=HEAD


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AK4RLQFV7RCIMXRFJWEJGDUJPALEKQMX/


[ovirt-devel] Re: Problems building CentOS 8.3-based engine image for OST

2020-12-07 Thread Marcin Sobczyk



On 12/7/20 4:15 PM, Michal Skrivanek wrote:



On 7 Dec 2020, at 16:06, Marcin Sobczyk  wrote:

Hi All,

since CentOS 8.3 is out, I'm trying to build a new base image for OST, but 
there are problems on the engine side.
The provisioning script we use to build the engine VM is here [1].

The build ends with errors:

Error: Problems in request:
missing groups or modules: javapackages-tools

does it no longer exist or not built yet?

Found a module called 'javapackages-runtime'.
When I enabled it we're left with "Problem 1" only:

Error:
 Problem: cannot install the best candidate for the job
  - nothing provides apache-commons-compress needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
  - nothing provides apache-commons-jxpath needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' 
to use not only best candidate packages)






Last metadata expiration check: 0:00:02 ago on Mon Dec  7 16:00:56 2020.
Error:
  Problem 1: cannot install the best candidate for the job
   - nothing provides apache-commons-compress needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
   - nothing provides apache-commons-jxpath needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
  Problem 2: package 
ovirt-engine-extension-aaa-ldap-setup-1.4.3-0.289.202010220206.el8.noarch 
requires ovirt-engine-extension-aaa-ldap = 1.4.3-0.289.202010220206.el8, but 
none of the providers can be installed
   - package 
ovirt-engine-extension-aaa-ldap-1.4.3-0.289.202010220206.el8.noarch requires 
slf4j-jdk14, but none of the providers can be installed
   - conflicting requests
   - package slf4j-jdk14-1.7.25-4.module_el8.3.0+454+67dccca4.noarch is 
filtered out by modular filtering

or perhaps it just got renamed (that’s what it should if that’s the case)


Please advise.

Thanks, Marcin

[1] 
https://gerrit.ovirt.org/gitweb?p=ost-images.git;a=blob_plain;f=el8-provision-engine.sh.in;hb=HEAD


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/KHOJG4PTMRDTVDCX74GVL4RWCZDS5VQA/


[ovirt-devel] Problems building CentOS 8.3-based engine image for OST

2020-12-07 Thread Marcin Sobczyk

Hi All,

since CentOS 8.3 is out, I'm trying to build a new base image for OST, 
but there are problems on the engine side.

The provisioning script we use to build the engine VM is here [1].

The build ends with errors:

Error: Problems in request:
missing groups or modules: javapackages-tools
Last metadata expiration check: 0:00:02 ago on Mon Dec  7 16:00:56 2020.
Error:
 Problem 1: cannot install the best candidate for the job
  - nothing provides apache-commons-compress needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
  - nothing provides apache-commons-jxpath needed by 
ovirt-engine-4.4.4.4-0.0.master.20201206151430.git1a096b0d4e7.el8.noarch
 Problem 2: package 
ovirt-engine-extension-aaa-ldap-setup-1.4.3-0.289.202010220206.el8.noarch 
requires ovirt-engine-extension-aaa-ldap = 1.4.3-0.289.202010220206.el8, 
but none of the providers can be installed
  - package 
ovirt-engine-extension-aaa-ldap-1.4.3-0.289.202010220206.el8.noarch 
requires slf4j-jdk14, but none of the providers can be installed

  - conflicting requests
  - package slf4j-jdk14-1.7.25-4.module_el8.3.0+454+67dccca4.noarch is 
filtered out by modular filtering


Please advise.

Thanks, Marcin

[1] 
https://gerrit.ovirt.org/gitweb?p=ost-images.git;a=blob_plain;f=el8-provision-engine.sh.in;hb=HEAD

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YC3NNGM3EFH5C4QTR7DMF3KIMJLIYOEQ/


[ovirt-devel] Re: Dropping all el7 OST runs

2020-12-07 Thread Marcin Sobczyk



On 12/7/20 11:00 AM, Yedidyah Bar David wrote:

On Mon, Dec 7, 2020 at 11:48 AM Marcin Sobczyk  wrote:



On 12/7/20 8:19 AM, Yedidyah Bar David wrote:

On Thu, Dec 3, 2020 at 5:16 PM Marcin Sobczyk  wrote:

On 12/3/20 1:02 PM, Ehud Yonasi wrote:

Correct. It is now fixed with the manual ost job.

Tried running a manual job a moment ago, but ended up with a
docker-related error [4].

Also, are the nightly runs [1] running in el8? Half of them still get
stuck on 'Create local VM' and timeout, which usually used to happen
on el7.

AFAICS they're still using el7 containers:

02:55:24 Building remotely on openshift-integ-tests-container-wrrzs (el7
integ-tests-container) in workspace
/home/jenkins/agent/workspace/ovirt-system-tests_he-basic-suite-master

but I think fixing the manual runs is much more important right now.

IMO both are important! ;-)

We had he-basic-suite broken for more than 50% of the time, last year.
If we let it remain broken, we risk not noticing real bugs/regressions.
Of course, I just meant we're actively working on and running the he suite
regularly right now, so manual runs are a higher priority ATM.



I also guess it's a simple fix, no? I think something like:

https://gerrit.ovirt.org/c/jenkins/+/112527

(I'd change all el7 to el8 there, except for 4.3 suites, but perhaps better
split and do this later).
Probably not - el8 chroot running on el7 kernel won't work, so we'd need
to do node filtering, but I'll leave that to Ehud :)




Regards, Marcin


Thanks,

[1] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/


[4]
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7604/console#L1,285


On Thu, Dec 3, 2020 at 1:54 PM Marcin Sobczyk <msobc...@redhat.com> wrote:



  On 12/3/20 12:33 PM, Ehud Yonasi wrote:
  > Hey,
  >
  > If there are no more OST jobs that will use el7, Evgheni can start
  > rebuilding those nodes to el8.
  There shouldn't be.

  > Jenkins won't respawn the ost container anymore.
  So right now if I try to run a manual OST job it still seems to be
  using
  containers.
  Isn't [3] supposed to be changed?

  [3]
  
https://github.com/oVirt/jenkins/blob/0a824788eda1e6d6755b451817e3964e1bf14bfc/jobs/confs/projects/ovirt/system-tests.yaml#L98

  >
  > On Thu, Dec 3, 2020 at 1:28 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
  >
  >
  >
  > On 12/1/20 8:18 PM, Marcin Sobczyk wrote:
  > > Hi,
  > >
  > > the patch that removed all el7 OST runs has been merged.
  > >
  > > Regards, Marcin
  > >
  > > On 11/26/20 12:04 PM, Anton Marchukov wrote:
  > >> Ehud will switch it back to baremetals on the next week. The
  > same will
  > >> have to be done for OST CI itself. After that we can
  rebuild unused
  > >> openshift nodes back to CI baremetals. And also convert el7
  > baremetals
  > >> to el8 ones (I guess we will leave few and watch the load
  on them).
  > >> Please expect some capacity drop during this time.
  > Anton, Ehud, any progress on this?
  >
  > Thanks, Marcin
  >
  > >>
  > >> On Thu, Nov 26, 2020 at 12:00 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
  > >>
  > >>  Hi,
  > >>
  > >>  since all the important suites are working on el8
  already,
  > we're
  > >>  planning to drop all el7 OST runs with [1] very soon.
  > >>
  > >>  This means we can finally say goodbye to py2 and other
  > legacy stuff!
  > >>
  > >>  We still need to move manual OST runs not to use
  > containers for
  > >>  that to
  > >>  happen.
  > >>  This effort is tracked here [2].
  > >>
  > >>  Regards, Marcin
  > >>
  > >>  [1] https://gerrit.ovirt.org/112378

[ovirt-devel] Re: Dropping all el7 OST runs

2020-12-07 Thread Marcin Sobczyk



On 12/7/20 8:19 AM, Yedidyah Bar David wrote:

On Thu, Dec 3, 2020 at 5:16 PM Marcin Sobczyk  wrote:


On 12/3/20 1:02 PM, Ehud Yonasi wrote:

Correct. It is now fixed with the manual ost job.

Tried running a manual job a moment ago, but ended up with a
docker-related error [4].

Also, are the nightly runs [1] running in el8? Half of them still get
stuck on 'Create local VM' and timeout, which usually used to happen
on el7.

AFAICS they're still using el7 containers:

02:55:24 Building remotely on openshift-integ-tests-container-wrrzs (el7 
integ-tests-container) in workspace 
/home/jenkins/agent/workspace/ovirt-system-tests_he-basic-suite-master


but I think fixing the manual runs is much more important right now.

Regards, Marcin



Thanks,

[1] https://jenkins.ovirt.org/job/ovirt-system-tests_he-basic-suite-master/


[4]
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7604/console#L1,285


On Thu, Dec 3, 2020 at 1:54 PM Marcin Sobczyk <msobc...@redhat.com> wrote:



 On 12/3/20 12:33 PM, Ehud Yonasi wrote:
 > Hey,
 >
 > If there are no more OST jobs that will use el7, Evgheni can start
 > rebuilding those nodes to el8.
 There shouldn't be.

 > Jenkins won't respawn the ost container anymore.
 So right now if I try to run a manual OST job it still seems to be
 using
 containers.
 Isn't [3] supposed to be changed?

 [3]
 
https://github.com/oVirt/jenkins/blob/0a824788eda1e6d6755b451817e3964e1bf14bfc/jobs/confs/projects/ovirt/system-tests.yaml#L98

 >
 > On Thu, Dec 3, 2020 at 1:28 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
 >
     >
 >
 > On 12/1/20 8:18 PM, Marcin Sobczyk wrote:
 > > Hi,
 > >
 > > the patch that removed all el7 OST runs has been merged.
 > >
 > > Regards, Marcin
 > >
 > > On 11/26/20 12:04 PM, Anton Marchukov wrote:
 > >> Ehud will switch it back to baremetals on the next week. The
 > same will
 > >> have to be done for OST CI itself. After that we can
 rebuild unused
 > >> openshift nodes back to CI baremetals. And also convert el7
 > baremetals
 > >> to el8 ones (I guess we will leave few and watch the load
 on them).
 > >> Please expect some capacity drop during this time.
     > Anton, Ehud, any progress on this?
 >
 > Thanks, Marcin
 >
 > >>
 > >> On Thu, Nov 26, 2020 at 12:00 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
 > >>
 > >>  Hi,
 > >>
 > >>  since all the important suites are working on el8
 already,
 > we're
 > >>  planning to drop all el7 OST runs with [1] very soon.
 > >>
 > >>  This means we can finally say goodbye to py2 and other
 > legacy stuff!
 > >>
 > >>  We still need to move manual OST runs not to use
 > containers for
 > >>  that to
 > >>  happen.
 > >>  This effort is tracked here [2].
 > >>
 > >>  Regards, Marcin
 > >>
 > >>  [1] https://gerrit.ovirt.org/112378
 > >>  [2] https://ovirt-jira.atlassian.net/browse/OST-145
 > >>

[ovirt-devel] Re: Dropping all el7 OST runs

2020-12-03 Thread Marcin Sobczyk


On 12/3/20 1:02 PM, Ehud Yonasi wrote:

Correct. It is now fixed with the manual ost job.
Tried running a manual job a moment ago, but ended up with a
docker-related error [4].


[4] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_manual/7604/console#L1,285




On Thu, Dec 3, 2020 at 1:54 PM Marcin Sobczyk <msobc...@redhat.com> wrote:




On 12/3/20 12:33 PM, Ehud Yonasi wrote:
> Hey,
>
> If there are no more OST jobs that will use el7, Evgheni can start
> rebuilding those nodes to el8.
There shouldn't be.

> Jenkins won't respawn the ost container anymore.
So right now if I try to run a manual OST job it still seems to be
using
containers.
Isn't [3] supposed to be changed?

[3]

https://github.com/oVirt/jenkins/blob/0a824788eda1e6d6755b451817e3964e1bf14bfc/jobs/confs/projects/ovirt/system-tests.yaml#L98

>
> On Thu, Dec 3, 2020 at 1:28 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
>
>
>
>     On 12/1/20 8:18 PM, Marcin Sobczyk wrote:
>     > Hi,
>     >
>     > the patch that removed all el7 OST runs has been merged.
>     >
>     > Regards, Marcin
>     >
>     > On 11/26/20 12:04 PM, Anton Marchukov wrote:
>     >> Ehud will switch it back to baremetals on the next week. The
>     same will
>     >> have to be done for OST CI itself. After that we can
rebuild unused
>     >> openshift nodes back to CI baremetals. And also convert el7
>     baremetals
>     >> to el8 ones (I guess we will leave few and watch the load
on them).
>     >> Please expect some capacity drop during this time.
>     Anton, Ehud, any progress on this?
>
>     Thanks, Marcin
>
>     >>
>     >> On Thu, Nov 26, 2020 at 12:00 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
>     >>
>     >>      Hi,
>     >>
>     >>      since all the important suites are working on el8
already,
>     we're
>     >>      planning to drop all el7 OST runs with [1] very soon.
>     >>
>     >>      This means we can finally say goodbye to py2 and other
>     legacy stuff!
>     >>
>     >>      We still need to move manual OST runs not to use
>     containers for
>     >>      that to
>     >>      happen.
>     >>      This effort is tracked here [2].
>     >>
>     >>      Regards, Marcin
>     >>
>     >>      [1] https://gerrit.ovirt.org/112378
>     >>      [2] https://ovirt-jira.atlassian.net/browse/OST-145
>     >>
>     >>
>     >>
>     >> --
>     >> Anton Marchukov
>     >> Associate Manager - RHV DevOps - Red Hat
>



___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4LSP4EO4V5SCF5VR3ACOCT6JSE7MLASV/


[ovirt-devel] Re: Dropping all el7 OST runs

2020-12-03 Thread Marcin Sobczyk



On 12/3/20 12:33 PM, Ehud Yonasi wrote:

Hey,

If there are no more OST jobs that will use el7, Evgheni can start 
rebuilding those nodes to el8.

There shouldn't be.


Jenkins won't respawn the ost container anymore.
So right now if I try to run a manual OST job it still seems to be using 
containers.

Isn't [3] supposed to be changed?

[3] 
https://github.com/oVirt/jenkins/blob/0a824788eda1e6d6755b451817e3964e1bf14bfc/jobs/confs/projects/ovirt/system-tests.yaml#L98




On Thu, Dec 3, 2020 at 1:28 PM Marcin Sobczyk <msobc...@redhat.com> wrote:




On 12/1/20 8:18 PM, Marcin Sobczyk wrote:
> Hi,
>
> the patch that removed all el7 OST runs has been merged.
>
> Regards, Marcin
>
> On 11/26/20 12:04 PM, Anton Marchukov wrote:
>> Ehud will switch it back to baremetals on the next week. The
same will
>> have to be done for OST CI itself. After that we can rebuild unused
>> openshift nodes back to CI baremetals. And also convert el7
baremetals
>> to el8 ones (I guess we will leave few and watch the load on them).
>> Please expect some capacity drop during this time.
Anton, Ehud, any progress on this?

Thanks, Marcin

    >>
>> On Thu, Nov 26, 2020 at 12:00 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
>>
>>      Hi,
>>
>>      since all the important suites are working on el8 already,
we're
>>      planning to drop all el7 OST runs with [1] very soon.
>>
>>      This means we can finally say goodbye to py2 and other
legacy stuff!
>>
>>      We still need to move manual OST runs not to use
containers for
>>      that to
>>      happen.
>>      This effort is tracked here [2].
>>
>>      Regards, Marcin
>>
>>      [1] https://gerrit.ovirt.org/112378
>>      [2] https://ovirt-jira.atlassian.net/browse/OST-145
>>
>>
>>
>> --
>> Anton Marchukov
>> Associate Manager - RHV DevOps - Red Hat



___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6PN5JCCWCBKNDQEOCU6QUT5IMPTFXUSW/


[ovirt-devel] Re: Dropping all el7 OST runs

2020-12-03 Thread Marcin Sobczyk



On 12/1/20 8:18 PM, Marcin Sobczyk wrote:

Hi,

the patch that removed all el7 OST runs has been merged.

Regards, Marcin

On 11/26/20 12:04 PM, Anton Marchukov wrote:

Ehud will switch it back to baremetals on the next week. The same will
have to be done for OST CI itself. After that we can rebuild unused
openshift nodes back to CI baremetals. And also convert el7 baremetals
to el8 ones (I guess we will leave few and watch the load on them).
Please expect some capacity drop during this time.

Anton, Ehud, any progress on this?

Thanks, Marcin



On Thu, Nov 26, 2020 at 12:00 PM Marcin Sobczyk <msobc...@redhat.com> wrote:

 Hi,

 since all the important suites are working on el8 already, we're
 planning to drop all el7 OST runs with [1] very soon.

 This means we can finally say goodbye to py2 and other legacy stuff!

 We still need to move manual OST runs not to use containers for
 that to
 happen.
 This effort is tracked here [2].

 Regards, Marcin

 [1] https://gerrit.ovirt.org/112378
 [2] https://ovirt-jira.atlassian.net/browse/OST-145



--
Anton Marchukov
Associate Manager - RHV DevOps - Red Hat

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZPGMURABINVFGJ4BGUWJ634FMXRSI4PW/


[ovirt-devel] Re: Dropping all el7 OST runs

2020-12-01 Thread Marcin Sobczyk

Hi,

the patch that removed all el7 OST runs has been merged.

Regards, Marcin

On 11/26/20 12:04 PM, Anton Marchukov wrote:
Ehud will switch it back to baremetals on the next week. The same will 
have to be done for OST CI itself. After that we can rebuild unused 
openshift nodes back to CI baremetals. And also convert el7 baremetals 
to el8 ones (I guess we will leave few and watch the load on them). 
Please expect some capacity drop during this time.


On Thu, Nov 26, 2020 at 12:00 PM Marcin Sobczyk <msobc...@redhat.com> wrote:


Hi,

since all the important suites are working on el8 already, we're
planning to drop all el7 OST runs with [1] very soon.

This means we can finally say goodbye to py2 and other legacy stuff!

We still need to move manual OST runs not to use containers for
that to
happen.
This effort is tracked here [2].

Regards, Marcin

[1] https://gerrit.ovirt.org/112378
[2] https://ovirt-jira.atlassian.net/browse/OST-145



--
Anton Marchukov
Associate Manager - RHV DevOps - Red Hat

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/Y2N3KWOEYFP46QSGFDXJTADLWQ7VYWLV/


[ovirt-devel] Re: [OST] Error updating ost images - packages with conflicts/broken deps

2020-11-30 Thread Marcin Sobczyk



On 11/30/20 10:14 AM, Marcin Sobczyk wrote:

Hi,

confirming that it's a problem.
Sorry about it, working on a fix right now.

Fixed by https://gerrit.ovirt.org/#/c/ost-images/+/112418/
All the packages are already rebuilt and published, so everything should 
be working now.


Regards, Marcin



Regards, Marcin

On 11/29/20 5:11 PM, Nir Soffer wrote:

Trying 'dnf update' on an OST system now fails with:

$ sudo dnf update
...
Error:
   Problem 1: package
ost-images-el8-host-deps-installed-1-202011270303.x86_64 requires
ost-images-el8-upgrade = 1-202011270303, but none of the providers can
be installed
- cannot install both ost-images-el8-upgrade-1-202011290315.x86_64
and ost-images-el8-upgrade-1-202011270303.x86_64
- cannot install the best update candidate for package
ost-images-el8-upgrade-1-202011270303.x86_64
- cannot install the best update candidate for package
ost-images-el8-host-deps-installed-1-202011270303.x86_64
   Problem 2: package
ost-images-el8-engine-deps-installed-1-202011270303.x86_64 requires
ost-images-el8-upgrade = 1-202011270303, but none of the providers can
be installed
- cannot install both ost-images-el8-upgrade-1-202011290315.x86_64
and ost-images-el8-upgrade-1-202011270303.x86_64
- package ost-images-el8-engine-installed-1-202011290315.x86_64
requires ost-images-el8-upgrade = 1-202011290315, but none of the
providers can be installed
- cannot install the best update candidate for package
ost-images-el8-engine-installed-1-202011270303.x86_64
- cannot install the best update candidate for package
ost-images-el8-engine-deps-installed-1-202011270303.x86_64
   Problem 3: problem with installed package
ost-images-el8-host-deps-installed-1-202011270303.x86_64
- package ost-images-el8-host-deps-installed-1-202011270303.x86_64
requires ost-images-el8-upgrade = 1-202011270303, but none of the
providers can be installed
- cannot install both ost-images-el8-upgrade-1-202011290315.x86_64
and ost-images-el8-upgrade-1-202011270303.x86_64
- package ost-images-el8-host-installed-1-202011290315.x86_64
requires ost-images-el8-upgrade = 1-202011290315, but none of the
providers can be installed
- cannot install the best update candidate for package
ost-images-el8-host-installed-1-202011270303.x86_64
(try to add '--allowerasing' to command line to replace conflicting
packages or '--skip-broken' to skip uninstallable packages or
'--nobest' to use not only best candidate packages)

$ sudo dnf update --nobest

works, but complains about:

Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
   ost-images-el8-upgrade  x86_64
1-202011290315  ost-images  15 M
Skipping packages with broken dependencies:
   ost-images-el8-engine-installed x86_64
1-202011290315  ost-images 1.0 G
   ost-images-el8-host-installed   x86_64
1-202011290315  ost-images 518 M

I have not tried to run ost with the updated packages yet.

Nir


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/W27X4S4XURTG75PKENF6WYIW2IZWXYAL/


[ovirt-devel] Re: [OST] Error updating ost images - packages with conflicts/broken deps

2020-11-30 Thread Marcin Sobczyk

Hi,

confirming that it's a problem.
Sorry about it, working on a fix right now.

Regards, Marcin

On 11/29/20 5:11 PM, Nir Soffer wrote:

Trying 'dnf update' on an OST system now fails with:

$ sudo dnf update
...
Error:
  Problem 1: package
ost-images-el8-host-deps-installed-1-202011270303.x86_64 requires
ost-images-el8-upgrade = 1-202011270303, but none of the providers can
be installed
   - cannot install both ost-images-el8-upgrade-1-202011290315.x86_64
and ost-images-el8-upgrade-1-202011270303.x86_64
   - cannot install the best update candidate for package
ost-images-el8-upgrade-1-202011270303.x86_64
   - cannot install the best update candidate for package
ost-images-el8-host-deps-installed-1-202011270303.x86_64
  Problem 2: package
ost-images-el8-engine-deps-installed-1-202011270303.x86_64 requires
ost-images-el8-upgrade = 1-202011270303, but none of the providers can
be installed
   - cannot install both ost-images-el8-upgrade-1-202011290315.x86_64
and ost-images-el8-upgrade-1-202011270303.x86_64
   - package ost-images-el8-engine-installed-1-202011290315.x86_64
requires ost-images-el8-upgrade = 1-202011290315, but none of the
providers can be installed
   - cannot install the best update candidate for package
ost-images-el8-engine-installed-1-202011270303.x86_64
   - cannot install the best update candidate for package
ost-images-el8-engine-deps-installed-1-202011270303.x86_64
  Problem 3: problem with installed package
ost-images-el8-host-deps-installed-1-202011270303.x86_64
   - package ost-images-el8-host-deps-installed-1-202011270303.x86_64
requires ost-images-el8-upgrade = 1-202011270303, but none of the
providers can be installed
   - cannot install both ost-images-el8-upgrade-1-202011290315.x86_64
and ost-images-el8-upgrade-1-202011270303.x86_64
   - package ost-images-el8-host-installed-1-202011290315.x86_64
requires ost-images-el8-upgrade = 1-202011290315, but none of the
providers can be installed
   - cannot install the best update candidate for package
ost-images-el8-host-installed-1-202011270303.x86_64
(try to add '--allowerasing' to command line to replace conflicting
packages or '--skip-broken' to skip uninstallable packages or
'--nobest' to use not only best candidate packages)

$ sudo dnf update --nobest

works, but complains about:

Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
  ost-images-el8-upgrade  x86_64
1-202011290315  ost-images  15 M
Skipping packages with broken dependencies:
  ost-images-el8-engine-installed x86_64
1-202011290315  ost-images 1.0 G
  ost-images-el8-host-installed   x86_64
1-202011290315  ost-images 518 M

I have not tried to run ost with the updated packages yet.

Nir
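The conflicts above all stem from the ost-images sub-packages requiring an exact version-release of each other (e.g. `ost-images-el8-upgrade = 1-202011290315`), so a partially published build leaves dnf with no consistent set to install. One quick way to spot such a mismatch is to compare the version-release suffixes of the packages involved; a rough sketch (the helper is hypothetical and just parses `rpm -qa 'ost-images-*'`-style output saved to a file):

```shell
# ost_versions FILE - print the distinct version-release suffixes found in a
# list of ost-images NVRAs (one per line, as printed by rpm -qa), e.g.
# 'ost-images-el8-upgrade-1-202011290315.x86_64' yields '1-202011290315'.
ost_versions() {
    sed -n 's/^ost-images-.*-\([^-]*-[^-.]*\)\.[^.]*$/\1/p' "$1" | sort -u
}
```

More than one line of output means mixed builds are present, which is exactly the state the resolver is complaining about.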


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/276TFCSNE6DEP4K3FFBUZJOAFYPRFMEX/


[ovirt-devel] Re: Health check endpoint in the engine hanging forever on CheckDBConnection

2020-11-27 Thread Marcin Sobczyk



On 11/27/20 11:24 AM, Martin Perina wrote:

Hi,

the health status is a pretty stupid-simple call to the database:

https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/services/src/main/java/org/ovirt/engine/core/services/HealthStatus.java
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/bll/src/main/java/org/ovirt/engine/core/bll/CheckDBConnectionQuery.java#L21
https://github.com/oVirt/ovirt-engine/blob/master/backend/manager/modules/dal/src/main/java/org/ovirt/engine/core/dal/dbbroker/DbConnectionUtil.java#L33
https://github.com/oVirt/ovirt-engine/blob/master/packaging/dbscripts/common_sp.sql#L421


So it should definitely not hang forever unless there is some serious 
issue in the engine start up or PostgreSQL database. Could you please 
share logs? Especially interesting would be server.log and engine.log
from /var/log/ovirt-engine.
Well, after some discussion with Artur and trying some workarounds, the 
problem magically
disappeared on my servers, but there's one OST gating run in CI that 
suffered from the same problem:


https://jenkins.ovirt.org/blue/organizations/jenkins/ovirt-system-tests_gate/detail/ovirt-system-tests_gate/937/pipeline#step-240-log-1226

https://jenkins.ovirt.org/job/ovirt-system-tests_gate/937/artifact/basic-suit-master.el7.x86_64/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/ovirt-engine/server.log/*view*/

https://jenkins.ovirt.org/job/ovirt-system-tests_gate/937/artifact/basic-suit-master.el7.x86_64/test_logs/basic-suite-master/lago-basic-suite-master-engine/_var_log/ovirt-engine/engine.log/*view*/




Martin


On Thu, Nov 26, 2020 at 4:24 PM Marcin Sobczyk <msobc...@redhat.com> wrote:


Hi,

I'm trying out (I think) the latest build of ovirt-engine in OST
[1] and
the basic suite
fails when we do engine reconfiguration and then restart the
service [2].
After restarting we wait on the health check endpoint status here [3].
This however ends with a timeout.
I tried running manually:

curl -D - http://engine/ovirt-engine/services/health

and that command also hangs forever.
In the engine log the last related entries seem to be:

2020-11-26 16:20:15,761+01 DEBUG
[org.ovirt.engine.core.utils.servlet.LocaleFilter] (default
task-7) []
Incoming locale 'en-US'. Filter determined locale to be 'en-US'
2020-11-26 16:20:15,761+01 DEBUG
[org.ovirt.engine.core.services.HealthStatus] (default task-7) []
Health
Status servlet: entry
2020-11-26 16:20:15,761+01 DEBUG
[org.ovirt.engine.core.services.HealthStatus] (default task-7) []
Calling CheckDBConnection query

Has anyone else also encountered that?

Regards, Marcin

[1]
ovirt-engine-4.4.4.3-0.0.master.20201126133903.gitc2c805a2662.el8.noarch
[2]
https://github.com/oVirt/ovirt-system-tests/blob/3e2fc267b376a12eda131fa0e9cda2d94e36e2be/basic-suite-master/test-scenarios/test_001_initialize_engine.py#L88
[3]
https://github.com/oVirt/ovirt-system-tests/blob/3e2fc267b376a12eda131fa0e9cda2d94e36e2be/ost_utils/ost_utils/pytest/fixtures/engine.py#L160



--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WJYHDYPNFD3J7BFPMPTKJXNWNBNZUFGZ/


[ovirt-devel] Health check endpoint in the engine hanging forever on CheckDBConnection

2020-11-26 Thread Marcin Sobczyk

Hi,

I'm trying out (I think) the latest build of ovirt-engine in OST [1] and
the basic suite fails when we do engine reconfiguration and then restart
the service [2].
After restarting we wait on the health check endpoint status here [3].
This however ends with a timeout.
I tried running manually:

curl -D - http://engine/ovirt-engine/services/health

and that command also hangs forever.
In the engine log the last related entries seem to be:

2020-11-26 16:20:15,761+01 DEBUG 
[org.ovirt.engine.core.utils.servlet.LocaleFilter] (default task-7) [] 
Incoming locale 'en-US'. Filter determined locale to be 'en-US'
2020-11-26 16:20:15,761+01 DEBUG 
[org.ovirt.engine.core.services.HealthStatus] (default task-7) [] Health 
Status servlet: entry
2020-11-26 16:20:15,761+01 DEBUG 
[org.ovirt.engine.core.services.HealthStatus] (default task-7) [] 
Calling CheckDBConnection query


Has anyone else also encountered that?

Regards, Marcin

[1] ovirt-engine-4.4.4.3-0.0.master.20201126133903.gitc2c805a2662.el8.noarch
[2] 
https://github.com/oVirt/ovirt-system-tests/blob/3e2fc267b376a12eda131fa0e9cda2d94e36e2be/basic-suite-master/test-scenarios/test_001_initialize_engine.py#L88
[3] 
https://github.com/oVirt/ovirt-system-tests/blob/3e2fc267b376a12eda131fa0e9cda2d94e36e2be/ost_utils/ost_utils/pytest/fixtures/engine.py#L160
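The wait in [3] boils down to retrying the health request until it succeeds or a deadline passes. A minimal sketch of that retry pattern (the function name is mine; note that curl itself needs a `--max-time`, otherwise a hang like the one above stalls every single attempt instead of failing fast):

```shell
# wait_for_url CMD TIMEOUT - retry shell command CMD once a second until it
# succeeds, returning non-zero if TIMEOUT seconds elapse without a success.
wait_for_url() {
    local deadline=$(( $(date +%s) + $2 ))
    until eval "$1"; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            return 1
        fi
        sleep 1
    done
}
```

For example: `wait_for_url 'curl -fsS --max-time 10 http://engine/ovirt-engine/services/health' 300`.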

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6HH2MYIIA7VWE5KFSTTHEIRPJC73YPC2/


[ovirt-devel] Dropping all el7 OST runs

2020-11-26 Thread Marcin Sobczyk

Hi,

since all the important suites are working on el8 already, we're
planning to drop all el7 OST runs with [1] very soon.

This means we can finally say goodbye to py2 and other legacy stuff!

We still need to move manual OST runs not to use containers for that to 
happen.

This effort is tracked here [2].

Regards, Marcin

[1] https://gerrit.ovirt.org/112378
[2] https://ovirt-jira.atlassian.net/browse/OST-145
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/UQC5FJSP5OWX4OWRIGMQK2DE4WJSBYWC/


[ovirt-devel] Re: VDSM CI patches stuck in queue for several hours

2020-11-24 Thread Marcin Sobczyk



On 11/24/20 8:12 AM, Ales Musil wrote:

Hi,

most of my patches lately are getting stuck in CI. For example [0] is 
hanging there for
17 hours waiting for a container that would be able to run nmstate 
functional tests.


Can someone please take a look at that?

So your job states:

This script required nodes with label: (fc30 || fc31 || fc32 || rhel7 || 
rhel8)


but the only fc30 node we have is down right now:

https://jenkins.ovirt.org/label/fc30/
https://jenkins.ovirt.org/computer/vm0148.workers-phx.ovirt.org/

It's better to post things like these to +infra.

Regards, Marcin



Thanks,
Ales

[0] https://jenkins.ovirt.org/job/vdsm_standard-check-patch/25066/ 


--

Ales Musil

Software Engineer - RHV Network

Red Hat EMEA 

amu...@redhat.com  IM: amusil




___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DWG2KAZIVYFLBHHUDFI4LBYSAML3W25T/

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YIFG3CC47Z4IZKCROXWL4GZPKIREGNII/


[ovirt-devel] Re: Branching out 4.3 in ovirt-system-tests

2020-11-23 Thread Marcin Sobczyk



On 11/23/20 4:24 PM, Martin Perina wrote:



On Mon, Nov 23, 2020 at 11:20 AM Marcin Sobczyk <msobc...@redhat.com> wrote:




On 11/20/20 12:15 PM, Martin Perina wrote:
>
>
> On Fri, Nov 20, 2020 at 12:02 PM Marcin Sobczyk <msobc...@redhat.com> wrote:
>
>
>
>     On 11/20/20 11:50 AM, Martin Perina wrote:
>     >
>     >
>     > On Fri, Nov 20, 2020 at 11:46 AM Marcin Sobczyk <msobc...@redhat.com> wrote:
>     >
>     >     Posted https://gerrit.ovirt.org/#/c/112302/ to remove all 4.3 suites.
>     >     Please review.
>     >
>     >
>     > Could you please remove network suite 4.3 from that patch? Network
>     > team would like to keep it to verify 4.3 host functionality. And they
>     > have already removed 4.3 execution from check-patch:
>     > https://gerrit.ovirt.org/#/c/112291/
>     > The plan is to execute the 4.3 network suite only nightly
>     Done. If there is still a need to run a 4.3 suite nightly, I'd be
>     pushing towards creating a 4.3 branch and removing the code from master though.
>
>
> Definitely, I've just created ovirt-engine-4.3 from current master, so
> we have current status saved and your removal patch can be merged to master
It seems that the tip of the branch points to a commit that already
includes Eitan's change that removed 'network_suite_4.3' from
'stdci.yaml' [6],
so we can't run this pipeline in CI.
Could you please change the branch to point to the commit just before
that? [7]


Readded: https://gerrit.ovirt.org/#/c/112324/

Thanks. The CI is not running yet though.
Tried a manual runner: 
https://jenkins.ovirt.org/job/standard-manual-runner/1619/parameters/

but it doesn't see the pipeline.

Ehud, could you please take a look at it?



Regards, Marcin

[6] https://gerrit.ovirt.org/#/c/112291/
[7] a2a462ef2f409ce5ef293adef6ecbb0c86c9d4a7

>
>     >
>     >
>     >     On 10/12/20 6:28 PM, Michal Skrivanek wrote:
>     >     >> On 12 Oct 2020, at 14:49, Marcin Sobczyk <msobc...@redhat.com> wrote:
>     >     >>
>     >     >> Hi all,
>     >     >>
>     >     >> after minimizing the usage of lago in basic suite,
>     >     >> and some minor adjustments in the network suite, we are
>     finally
>     >     >> able to remove lago OST plugin as a dependency [1].
>     >     >>
>     >     >> This however comes with a price of keeping lots of ugly
>     >     ifology, i.e. [2][3].
>     >     >> There's big disparity between OST runs we have on
el7 and
>     el8.
>     >     >> There's also tons of symlink-based code sharing between
>     suites
>     >     - be it 4.3
>     >     >> suites and master suites or simply different types
of suites.
>     >     >> The basic suite has its own 'test_utils', which is
>     copied/symlinked
>     >     >> in multiple places. There's also 'ost_utils', which
is really
>     >     messy ATM.

[ovirt-devel] "No module named pathlib" errors in OST

2020-11-23 Thread Marcin Sobczyk

Hi,

there was a new version of the 'importlib_metadata' library pushed to PyPI.
Even though it's compatible only with py3, it's picked up by py2 pip.
I posted a patch to pin the version to the previous one:

https://gerrit.ovirt.org/#/c/112325/

If you see this failure in suites other than basic, please notify me and 
I will adjust the patch.


Regards, Marcin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/L57ADVB3PT6KTVXYCTUKMQ3OENXFN2VG/


[ovirt-devel] Re: Branching out 4.3 in ovirt-system-tests

2020-11-23 Thread Marcin Sobczyk



On 11/20/20 12:15 PM, Martin Perina wrote:



On Fri, Nov 20, 2020 at 12:02 PM Marcin Sobczyk <msobc...@redhat.com> wrote:




On 11/20/20 11:50 AM, Martin Perina wrote:
>
>
> On Fri, Nov 20, 2020 at 11:46 AM Marcin Sobczyk <msobc...@redhat.com> wrote:
>
>     Posted https://gerrit.ovirt.org/#/c/112302/ to remove all 4.3 suites.
>     Please review.
>
>
> Could you please remove network suite 4.3 from that patch? Network
> team would like to keep it to verify 4.3 host functionality. And they
> have already removed 4.3 execution from check-patch:
> https://gerrit.ovirt.org/#/c/112291/
> The plan is to execute the 4.3 network suite only nightly
Done. If there is still a need to run a 4.3 suite nightly, I'd be
pushing towards
creating a 4.3 branch and removing the code from master though.


Definitely, I've just created ovirt-engine-4.3 from current master, so 
we have current status saved and your removal patch can be merged to 
master

It seems that the tip of the branch points to a commit that already
includes Eitan's change that removed 'network_suite_4.3' from 
'stdci.yaml' [6],

so we can't run this pipeline in CI.
Could you please change the branch to point to the commit just before 
that? [7]


Regards, Marcin

[6] https://gerrit.ovirt.org/#/c/112291/
[7] a2a462ef2f409ce5ef293adef6ecbb0c86c9d4a7



    >
>
>     On 10/12/20 6:28 PM, Michal Skrivanek wrote:
>     >> On 12 Oct 2020, at 14:49, Marcin Sobczyk <msobc...@redhat.com> wrote:
>     >>
>     >> Hi all,
>     >>
>     >> after minimizing the usage of lago in basic suite,
>     >> and some minor adjustments in the network suite, we are
finally
>     >> able to remove lago OST plugin as a dependency [1].
>     >>
>     >> This however comes with a price of keeping lots of ugly
>     ifology, i.e. [2][3].
>     >> There's big disparity between OST runs we have on el7 and
el8.
>     >> There's also tons of symlink-based code sharing between
suites
>     - be it 4.3
>     >> suites and master suites or simply different types of suites.
>     >> The basic suite has its own 'test_utils', which is
copied/symlinked
>     >> in multiple places. There's also 'ost_utils', which is really
>     messy ATM.
>     >> It's very hard to keep track and maintain all of this...
>     >>
>     >> At this moment, we are able to run basic suite and
network suite
>     >> on el8, with prebuilt ost-images and without lago plugin.
>     >> HE suites should be the next step. We have patches that
make them
>     >> py3-compatible that probably still need some attention
[4][5].
>     >> We don't have any prebuilt HE ost-images, but this will
be handled
>     >> in the nearest future.
>     >>
>     >> I think it's good time to detach ourselves from the
legacy stuff
>     >> and start with a clean slate. My proposition would be to
branch
>     >> out 4.3 in ovirt-system-tests and not use py2/el7 in the
master
>     >> branch at all. This would allow us to focus on py3, el8 and
>     ost-images
>     >> efforts while keeping the legacy stuff intact.
>     >>
>     >> WDYT?
>     > Great. We don’t really need 4.3 that much anymore.
>     >
>     >> Regards, Marcin
>     >>
>     >> [1] https://gerrit.ovirt.org/#/c/111643/
>     >> [2] https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master/control.sh
>     >> [3]
>

https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master

[ovirt-devel] Re: Branching out 4.3 in ovirt-system-tests

2020-11-20 Thread Marcin Sobczyk



On 11/20/20 11:50 AM, Martin Perina wrote:



On Fri, Nov 20, 2020 at 11:46 AM Marcin Sobczyk <msobc...@redhat.com> wrote:


Posted https://gerrit.ovirt.org/#/c/112302/ to remove all 4.3 suites.
Please review.


Could you please remove network suite 4.3 from that patch? Network 
team would like to keep it to verify 4.3 host functionality. And they 
have already removed 4.3 execution from check-patch: 
https://gerrit.ovirt.org/#/c/112291/

The plan is to execute 4.3 network suite only nightly
Done. If there is still a need to run a 4.3 suite nightly, I'd be 
pushing towards creating a 4.3 branch and removing the code from master though.




On 10/12/20 6:28 PM, Michal Skrivanek wrote:
>> On 12 Oct 2020, at 14:49, Marcin Sobczyk <msobc...@redhat.com> wrote:
>>
>> Hi all,
>>
>> after minimizing the usage of lago in basic suite,
>> and some minor adjustments in the network suite, we are finally
>> able to remove lago OST plugin as a dependency [1].
>>
>> This however comes with a price of keeping lots of ugly
ifology, i.e. [2][3].
>> There's big disparity between OST runs we have on el7 and el8.
>> There's also tons of symlink-based code sharing between suites
- be it 4.3
>> suites and master suites or simply different types of suites.
>> The basic suite has its own 'test_utils', which is copied/symlinked
>> in multiple places. There's also 'ost_utils', which is really
messy ATM.
>> It's very hard to keep track and maintain all of this...
>>
>> At this moment, we are able to run basic suite and network suite
>> on el8, with prebuilt ost-images and without lago plugin.
>> HE suites should be the next step. We have patches that make them
>> py3-compatible that probably still need some attention [4][5].
>> We don't have any prebuilt HE ost-images, but this will be handled
>> in the nearest future.
>>
>> I think it's good time to detach ourselves from the legacy stuff
>> and start with a clean slate. My proposition would be to branch
>> out 4.3 in ovirt-system-tests and not use py2/el7 in the master
>> branch at all. This would allow us to focus on py3, el8 and
ost-images
>> efforts while keeping the legacy stuff intact.
>>
>> WDYT?
> Great. We don’t really need 4.3 that much anymore.
>
>> Regards, Marcin
>>
>> [1] https://gerrit.ovirt.org/#/c/111643/
>> [2] https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master/control.sh
>> [3] https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master/test-scenarios/conftest.py
>> [4] https://gerrit.ovirt.org/108809
>> [5] https://gerrit.ovirt.org/110097
>>



--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SMVY2RASJKDWATMVCCTYE3KRJN4E6MOL/


[ovirt-devel] Re: [OST] Testing master suite on using python 2.7?!

2020-11-19 Thread Marcin Sobczyk



On 11/19/20 1:28 PM, Nir Soffer wrote:

On Wed, Nov 18, 2020 at 6:40 PM Nir Soffer  wrote:

I'm trying to add a test module for image transfer:
https://gerrit.ovirt.org/c/112274/

The test uses the concurrent.futures module from the standard library.
This module is not available in Python 2.7, but we don't support 2.7 in master
and it has been EOL since Jan 2020.
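
For reference, a minimal, self-contained sketch of the kind of concurrent.futures usage involved here: running a few simulated transfers through a thread pool. The `transfer` function is a made-up placeholder, not the actual OST test code.

```python
# Minimal sketch of py3-only concurrent.futures usage: run several
# simulated image transfers in parallel worker threads.
import concurrent.futures


def transfer(disk_id):
    # Placeholder for a real image transfer; just echoes its input.
    return "transferred-%s" % disk_id


with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    results = sorted(executor.map(transfer, ["disk1", "disk2", "disk3"]))

print(results)  # → ['transferred-disk1', 'transferred-disk2', 'transferred-disk3']
```

On Python 2.7 the `import concurrent.futures` line itself fails, which is exactly the collection error shown below.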

The test fails when starting the suite:

[2020-11-18T16:17:10.526Z] = test session
starts ==
[2020-11-18T16:17:10.526Z] platform linux2 -- Python 2.7.5,
pytest-4.6.9, py-1.9.0, pluggy-0.13.1 -- /usr/bin/python2
...
[2020-11-18T16:17:11.105Z]  ERRORS


[2020-11-18T16:17:11.105Z] ___ ERROR collecting
basic-suite-master/test-scenarios/008_image_transfer.py ___
[2020-11-18T16:17:11.105Z] ImportError while importing test module
'/home/jenkins/agent/workspace/ovirt-system-tests_standard-check-patch/ovirt-system-tests/basic-suite-master/test-scenarios/008_image_transfer.py'
[2020-11-18T16:17:11.105Z] Hint: make sure your test modules/packages
have valid Python names.
[2020-11-18T16:17:11.105Z] Traceback:
[2020-11-18T16:17:11.105Z]
../basic-suite-master/test-scenarios/008_image_transfer.py:22: in

[2020-11-18T16:17:11.105Z] import concurrent.futures
[2020-11-18T16:17:11.105Z] E   ImportError: No module named concurrent.futures

Should we use pytest.skip() to skip this test when running on python 2?
Or just remove the python 2 build? I don't have any idea why we run master code
on python 2.

You're asking the wrong question here.
The right one is: why do we still not have broad availability of el8 
agents in CI?

+Anton +Michal


I added a skip for python 2, but even with a python 3 job, we need the
ovirt-imageio-client package, which is available only on python 3.
Add the required package here: 
https://github.com/oVirt/ovirt-system-tests/blob/master/automation/basic_suite_master.packages.el8

It will be installed only for el8.



We need to change ci to use different builds for python 2 and 3, or
drop the python 2
builds from the master suite.

In the current state we cannot test image transfer in the CI, only
locally. Using local
OST is easy and reliable. even with nested setup.

I think we can solve this with markers - instead of collecting all the tests,
we can use:

 @pytest.mark.ci
 def test_that_works_in_ci():
 ...

 def test_that_does_not_work_in_ci():
 ...

The ci job can run:

  pytest -m "ci"

So it picks only tests that can run in the ci environment.

What do you think?
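
Conceptually, the selection described above works like this plain-Python sketch (the `ci_mark` and `select` helpers are invented for illustration; pytest's real marker machinery and `-m` expression grammar are richer):

```python
# Plain-Python sketch of marker-based test selection, mimicking the
# effect of `pytest -m "ci"`. Names here are made up for illustration.
def ci_mark(func):
    """Tag a test function as CI-safe (stand-in for @pytest.mark.ci)."""
    func.ci = True
    return func


@ci_mark
def test_that_works_in_ci():
    return "ran in ci"


def test_that_does_not_work_in_ci():
    return "needs a full local setup"


def select(tests, marker):
    """Keep only tests carrying the given marker, like `pytest -m`."""
    return [t for t in tests if getattr(t, marker, False)]


selected = select([test_that_works_in_ci, test_that_does_not_work_in_ci], "ci")
print([t.__name__ for t in selected])  # → ['test_that_works_in_ci']
```

With real pytest, the marker would also need to be registered (e.g. in pytest.ini) to avoid unknown-marker warnings.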


___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FYM5MNBV5Q2LQQJPLFYKRIXRMJFDTWZ6/


[ovirt-devel] expect OST failures on el8 for the next couple of hours

2020-11-16 Thread Marcin Sobczyk

Hi all,

while working on cleanup procedures for ost-images I accidentally removed
the RPMs from the templates server. Not my day... I'm rebuilding them 
right now
and will republish them ASAP, but if you try running el8-based 
basic/networking

suites you will probably end up with a failure.

Hopefully when the cleanup patch is ready I will never have to do this 
manually again

and will avoid mistakes like this in the future.

Very sorry for the inconvenience.

Regards, Marcin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/6YGSKE7BZXKEYZKAZXYMYAIVLRJDCCPD/


[ovirt-devel] vdsm CI is failing

2020-11-16 Thread Marcin Sobczyk

Hi,

vdsm CI is failing with:

[2020-11-16T10:51:40.132Z] No match for argument: ovirt-imageio-client-2.1.1
[2020-11-16T10:51:40.132Z] Error: Unable to find a match: 
ovirt-imageio-client-2.1.1


example run: 
https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/24926/pipeline


could you please take a look at it?

Regards, Marcin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2HSTTH3XIH2DR4DMS2NCFZ7MOE2TJHI3/


[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-06 Thread Marcin Sobczyk



On 11/5/20 2:03 PM, Yedidyah Bar David wrote:

On Thu, Nov 5, 2020 at 2:40 PM Marcin Sobczyk  wrote:



On 11/5/20 1:22 PM, Vojtech Juranek wrote:

IMO OST should be made easy to interact with from your main development
machine.

TBH I didn't see much interest in running OST on developers' machines.

as it's a little bit complex to set up?

Definitely, that's why the playbook and 'setup_for_ost.sh' was created.
I hope it will be the long term solution for this problem.


Making it easier might increase the
number of people contributing to OST ...

But that's a chicken and egg problem - who else is going to contribute
to OST if not us?
If we want the setup to be easier, then let's work on it.

I agree, but this requires time.

Most of the work done on OST so far was for CI. Using it by developers
was a side-interest.


People are mostly using manual OST runs to verify things and that is what
most of the efforts focus on. It's not that I wouldn't like OST to be more
developer-friendly, I definitely would, but we need more manpower
and interest for that to happen.


I noticed that many of you run OST in a VM ending up with three layers
of VMs.
I know it works, but I got multiple reports of assertions' timeouts and
TBH I just don't
see this as a viable solution to work with OST - you need a bare metal
for that.

Why?

After all, we also work on a virtualization product/project. If it's
not good enough for ourselves, how do we expect others to use it? :-)

I'm really cool with the engine and the hosts being VMs, but deploying
engine and the hosts as VMs nested in other VM is what I think is
unreasonable.

I tried this approach in past two days and works fine for me (except the fast
it's slow)

(One of the best freudian slips I noticed recently)


Maybe I'm wrong here, but I don't think our customers run whole oVirt
clusters
inside VMs. There's just too much overhead with all that layers of nesting
and the performance sucks.


Also, using bare-metal isn't always that easy/comfortable either, even
if you have the hardware.

I'm very happy with my servers. What makes working with bm
hard/uncomfortable?

IMHO main issue is lack of HW. I really don't want to run it directly on my
dev laptop (without running it inside VM). If I ssh/mount FS/tunnel ports to/
from VM or some bare metal server really doesn't matter (assuming reasonable

Lack of HW is definitely an issue, but not the only one.

When I did use BM for OST at home, my setup was something like this:

(home router) <-wifi or cable-> (laptop) <-cable-> (switch) <-cable-> BM

I configured on my laptop PXE for reinstallations. Worked quite well even
if not perfect.

I worked quite hard on _not_ using my laptop as a router. I ran on it
squid + squid-rpm-cache, and I think I never managed to make OST work
(through a proxy). Eventually I gave up and configured the laptop as a
router, and this worked.

Before that, I also tried disabling dhcp on the router and running dhcp+dns
PXE on a raspberry pi on the home network, but this was problematic in
other ways (I think the rpi was simply not very stable - had to power-
cycle it every few months).

I can't use the router as PXE.

Hmmm maybe we have a different understanding on "bare metal usage for OST".
What I meant is you should use a physical server, install lago et al on 
it and use it to run OST as usual.
I didn't mean using a separate bare metal for each of the engine and the 
hosts.

That way you don't need to have your own PXE and router - it's taken care
of by lago and prebuilt images of VMs.



I agree. I have the privilege of having separate servers to run OST.
Even though that would work, I can't imagine working with OST
on a daily basis on my laptop.

I wonder why. Not saying I disagree.

OST eats a lot of resources and makes your machine less responsive.
Given that, and the lengthy debugging cycles, with a separate server
I'm able to work on 2-3 things simultaneously.



I find the biggest load on my laptop to be from browsers running
web applications. I think this was ok also with my previous laptop,
so assume I can restrict the browsers (with cgroups or whatever) to
only use a slice (and I usually do not really care about their
performance). Just didn't spend the time on trying this yet.

(Previous laptop was from 2016, 16GB RAM. Current from 2019, 32GB).


That also kinda proves my point that people are not interested
in running OST on their machines - they don't have machines they could use.
I see three solutions to this:
- people start pushing managers to have their own servers

Not very likely. We are supposed to use VMs for that.


- we will have a machine-renting solution based on beaker
   (with nice, automatic provisioning for OST etc.), so we can
   work on bare metals

+1 from me.


- we focus on the CI and live with the "launch and pray" philosophy :)

If it's fast enough and fully automatic (e.g. using something like Nir's
script, but per

[ovirt-devel] Re: How to set up a (rh)el8 machine for running OST

2020-11-06 Thread Marcin Sobczyk

Hi,

On 11/5/20 6:16 PM, Sandro Bonazzola wrote:
Would you consider presenting the flow of setting up the whole thing 
and running a test starting with a minimal CentOS 8 host?
I would be happy to get it premiered on oVirt youtube channel and kept 
there for future references.
This will enable more people willing to contribute to test their 
changes in local OST.

I'd be very happy to, but I don't think it's the right time yet.
While the setup process should be smooth now, we have two competing ways 
of running suites.
One is the 'run_suite.sh' script, which is bloated and we're trying to 
move away from it somehow,
and the other is 'lagofy.sh', which is much more modern and lightweight, 
but still needs some polishing.
I would like to spend some more time on 'lagofy.sh' to make the user 
experience better and only then

make the tutorial.

Regards, Marcin



Il giorno mar 3 nov 2020 alle ore 14:22 Marcin Sobczyk 
mailto:msobc...@redhat.com>> ha scritto:


Hi All,

there are multiple pieces of information floating around on how to set
up a machine
for running OST. Some of them outdated (like dealing with el7), some
of them more recent,
but still a bit messy.

Not long ago, in some email conversation, Milan presented an ansible
playbook that provided
the steps necessary to do that. We've picked up the playbook, tweaked
it a bit, made a convenience shell script wrapper that runs it, and
pushed that into OST project [1].

This script, along with the playbook, should be our
single-source-of-truth, one-stop
solution for the job. It's been tested by a couple of persons and
proved to be able
to set up everything on a bare (rh)el8 machine. If you encounter any
problems with the script
please either report it on the devel mailing list, directly to me, or
simply file a patch.
Let's keep it maintained.

Regards, Marcin

[1] https://gerrit.ovirt.org/#/c/111749/
<https://gerrit.ovirt.org/#/c/111749/>
___
Devel mailing list -- devel@ovirt.org <mailto:devel@ovirt.org>
To unsubscribe send an email to devel-le...@ovirt.org
<mailto:devel-le...@ovirt.org>
Privacy Statement: https://www.ovirt.org/privacy-policy.html
<https://www.ovirt.org/privacy-policy.html>
oVirt Code of Conduct:
https://www.ovirt.org/community/about/community-guidelines/
<https://www.ovirt.org/community/about/community-guidelines/>
List Archives:

https://lists.ovirt.org/archives/list/devel@ovirt.org/message/N2V2OWSUTQS34YVHSMQVQS4UPDUOKCQM/

<https://lists.ovirt.org/archives/list/devel@ovirt.org/message/N2V2OWSUTQS34YVHSMQVQS4UPDUOKCQM/>



--

Sandro Bonazzola

MANAGER, SOFTWARE ENGINEERING, EMEA R&D RHV

Red Hat EMEA <https://www.redhat.com/>

sbona...@redhat.com <mailto:sbona...@redhat.com>

Red Hat respects your work life balance. Therefore there is no need 
to answer this email out of your office hours.

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/BGUILAMFZCV4QCBTOVIB5AP2ZXYBDMR3/


[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-05 Thread Marcin Sobczyk



On 11/5/20 1:22 PM, Vojtech Juranek wrote:

IMO OST should be made easy to interact with from your main development
machine.

TBH I didn't see much interest in running OST on developers' machines.

as it's a little bit complex to set up?

Definitely, that's why the playbook and 'setup_for_ost.sh' was created.
I hope it will be the long term solution for this problem.


Making it easier might increase the
number of people contributing to OST ...
But that's a chicken and egg problem - who else is going to contribute 
to OST if not us?

If we want the setup to be easier, then let's work on it.




People are mostly using manual OST runs to verify things and that is what
most of the efforts focus on. It's not that I wouldn't like OST to be more
developer-friendly, I definitely would, but we need more manpower
and interest for that to happen.


I noticed that many of you run OST in a VM ending up with three layers
of VMs.
I know it works, but I got multiple reports of assertions' timeouts and
TBH I just don't
see this as a viable solution to work with OST - you need a bare metal
for that.

Why?

After all, we also work on a virtualization product/project. If it's
not good enough for ourselves, how do we expect others to use it? :-)

I'm really cool with the engine and the hosts being VMs, but deploying
engine and the hosts as VMs nested in other VM is what I think is
unreasonable.

I tried this approach in past two days and works fine for me (except the fast
it's slow)


Maybe I'm wrong here, but I don't think our customers run whole oVirt
clusters
inside VMs. There's just too much overhead with all that layers of nesting
and the performance sucks.


Also, using bare-metal isn't always that easy/comfortable either, even
if you have the hardware.

I'm very happy with my servers. What makes working with bm
hard/uncomfortable?

IMHO main issue is lack of HW. I really don't want to run it directly on my
dev laptop (without running it inside VM). If I ssh/mount FS/tunnel ports to/
from VM or some bare metal server really doesn't matter (assuming reasonable

I agree. I have the privilege of having separate servers to run OST.
Even though that would work, I can't imagine working with OST
on a daily basis on my laptop.

That also kinda proves my point that people are not interested
in running OST on their machines - they don't have machines they could use.
I see three solutions to this:
- people start pushing managers to have their own servers
- we will have a machine-renting solution based on beaker
 (with nice, automatic provisioning for OST etc.), so we can
 work on bare metals
- we focus on the CI and live with the "launch and pray" philosophy :)


connection speed to bare metal server)


I can think of reprovisioning, but that is not needed for OST usage.


CI also uses VMs for this, IIUC. Or did we move there to containers?
Perhaps we should invest in making this work well inside a container.

CI doesn't use VMs - it uses a mix of containers and bare metals.
The solution for containers can't handle el8 and that's why we're
stuck with running OST on el7 mostly (apart from the aforementioned
bare metals, which use el8).

There is a 'run-ost-container.sh' script in the project. I think some people
had luck using it, but I never even tried. Again, my personal opinion, as
much as I find containers useful and convenient in different situations,
this is not one of them - you should be using bare metal.

The "backend for OST" is a subject for a whole, new discussion.
My opinion here is that we should be using oVirt as backend for OST
(as in running oVirt cluster as VMs in oVirt). I'm a big fan of the
dogfooding
concept. This of course creates a set of new problems like "how can
developers
work with this", "where do we get the hosting oVirt cluster from" etc.
Whooole, new discussion :)

Regards, Marcin


On my bare metal server OST basic run takes 30 mins to complete. This is
something one
can work with, but we can do even better.

Thank you for your input and I hope that we can have more people
involved in OST
on a regular basis and not once-per-year hackathons. This is a complex
project, but it's
really useful.

+1!

Thanks and best regards,


Nice.

Thanks and best regards,

[1] https://github.com/lago-project/lago/blob/7bf288ad53da3f1b86c08b3283ee9c5118e7605e/lago/providers/libvirt/network.py#L162
[2] https://github.com/oVirt/ovirt-system-tests/blob/6d5c2a0f9fb3c05afc85471260065786b5fdc729/ost_utils/ost_utils/pytest/fixtures/engine.py#L105

___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4UTS3UIZ37WIBNVZQUZOS7MMASWQRVLK/


[ovirt-devel] Re: How to set up a (rh)el8 machine for running OST

2020-11-05 Thread Marcin Sobczyk



On 11/5/20 11:30 AM, Milan Zamazal wrote:

Marcin Sobczyk  writes:


On 11/4/20 11:29 AM, Yedidyah Bar David wrote:

Perhaps what you want, some day, is for the individual tests to have make-style 
dependencies? So you'll issue just a single test, and OST will only
run the bare minimum for running it.

Yeah, I had the same idea. It's not easy to implement it though.
'pytest' has a "tests are independent" design, so we would need to
build something on top of that (or try to invent our own test
framework, which is a very bad idea). But even with a
dependency-resolving solution, there are tests that set something up
just to bring it down in a moment (by design), so we'd probably need
some kind of "provides" and "tears down" markers.  Then you have the
fact that some things take a lot of time and we do other stuff in
between, while waiting - dependency resolving could force things to
happen linearly and the run times could skyrocket... It's a complex
subject that requires a serious think-through.
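
To make the idea concrete, a make-style dependency resolver over tests could be sketched like this (the dependency graph and test names below are invented, and OST has no such mechanism; it only illustrates computing the minimal ordered set of tests needed to reach a target):

```python
# Sketch of make-style test dependencies: given a target test, compute
# the minimal set of tests to run, in dependency order (DFS post-order).
DEPS = {
    "test_initialize_engine": [],
    "test_add_hosts": ["test_initialize_engine"],
    "test_add_storage_domain": ["test_add_hosts"],
    "test_run_vm": ["test_add_storage_domain"],
    "test_hotplug_disk": ["test_run_vm"],
}


def plan(target, deps=DEPS):
    """Return the minimal ordered list of tests needed to run `target`."""
    order, seen = [], set()

    def visit(name):
        if name in seen:
            return
        seen.add(name)
        for dep in deps[name]:
            visit(dep)
        order.append(name)

    visit(target)
    return order


print(plan("test_add_storage_domain"))
# → ['test_initialize_engine', 'test_add_hosts', 'test_add_storage_domain']
```

The hard parts mentioned above ("provides"/"tears down" semantics, interleaving long waits, keeping runs repeatable) are exactly what such a simple resolver does not capture.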

Actually I was once thinking about introducing test dependencies in
order to run independent tests in parallel and to speed up OST runs this
way.  The idea was that OST just waits on something at many places and
it could run other tests in the meantime (we do some test interleaving
in extreme cases but it's suboptimal and difficult to maintain).

Yeah, I think I remember you did that during one of OST's hackathons.



When arranging some things manually, I could achieve a significant
speedup.  But the problem is, of course, how to make an automated
dependency management and handle all the possible situations and corner
cases.  It would be quite a lot of work, I think.

Exactly. I.e. I can see there's [1], but of course that will work only 
on py3.

The dependency management is something we'd probably have to implement and
maintain on our own.

Then of course we'd be introducing a test repeatability
problem, since the ordering of things might differ between runs,
which, in the current state of OST, is something I'd like to avoid.

[1] https://pypi.org/project/pytest-asyncio/
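pytest-asyncio would be one building block for this; the payoff of overlapping the waits can be shown with plain asyncio (the step names are made up):

```python
import asyncio
import time

async def step(name, seconds):
    # Stand-in for an OST step that mostly waits on a VM or the engine.
    await asyncio.sleep(seconds)
    return name

async def run_independent_steps():
    # Two independent steps started together: total time is the longest
    # single wait, not the sum of both.
    start = time.monotonic()
    results = await asyncio.gather(step("deploy-engine", 0.2),
                                   step("deploy-host-0", 0.2))
    return results, time.monotonic() - start

results, elapsed = asyncio.run(run_independent_steps())
print(results, elapsed)  # both results arrive after ~0.2 s, not 0.4 s
```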


[ovirt-devel] Re: Testing image transfer and backup with OST environment

2020-11-05 Thread Marcin Sobczyk



On 11/5/20 9:09 AM, Yedidyah Bar David wrote:

On Wed, Nov 4, 2020 at 9:49 PM Nir Soffer  wrote:

I want to share useful info from the OST hackathon we had this week.

Image transfer must work with real hostnames to allow server
certificate verification.
Inside the OST environment, engine and hosts names are resolvable, but
on the host
(or vm) running OST, the names are not available.
Do we really need this? Can't we execute those image transfers on the 
host VMs instead?




This can be fixed by adding the engine and hosts to /etc/hosts like this:

$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.200.2 engine
192.168.200.3 lago-basic-suite-master-host-0
192.168.200.4 lago-basic-suite-master-host-1
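Scripting this manual step (a sketch; the addresses come from this run's example and will differ between runs):

```python
# Append any missing "IP name" lines to a hosts file, idempotently.
# The mapping below is the example from this run; writing the real
# /etc/hosts of course requires root.
HOSTS = {
    "192.168.200.2": "engine",
    "192.168.200.3": "lago-basic-suite-master-host-0",
    "192.168.200.4": "lago-basic-suite-master-host-1",
}

def ensure_hosts_entries(hosts, path="/etc/hosts"):
    """Append missing entries; return the lines that were added."""
    with open(path) as f:
        present = f.read().split()
    missing = ["%s %s" % (ip, name) for ip, name in hosts.items()
               if name not in present]
    if missing:
        with open(path, "a") as f:
            f.write("\n".join(missing) + "\n")
    return missing
```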
Modifying '/etc/hosts' requires root privileges - it will work in mock, 
but nowhere else and IMO is a bad idea.



Are these addresses guaranteed to be static?

Where are they defined?
No, they're not, and I think that's a good thing - if we end up assigning
them statically, sooner or later we will stumble upon "this always worked
because we always used the same IP addresses" bugs.

Libvirt runs a 'dnsmasq' instance that the VMs use. The XML definition for DNS is 
done by lago [1].





It would be nice if this was automated by OST. You can get the details using:

$ cd src/ovirt-system-tests/deployment-xxx
$ lago status

It would have been even nicer if it was possible/easy to have this working
dynamically without user intervention.

I thought about and searched for ways to achieve this, but failed to find
anything simple.

Closest options I found, in case someone feels like playing with this:

1. Use HOSTALIASES. 'man 7 hostname' for details, or e.g.:

https://blog.tremily.us/posts/HOSTALIASES/

With this, if indeed the addresses are static, but you do not want to have
them hardcoded in /etc (say, because you want different ones per different
runs/needs/whatever), you can add them hardcoded there with some longer
name, and have a process-specific HOSTALIASES file mapping e.g. 'engine'
to the engine of this specific run.
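A minimal sketch of the HOSTALIASES trick from Python (the alias target FQDN is a made-up example; glibc reads the file lazily, so the variable must be set before the first lookup - simplest from a fresh child process):

```python
import os
import subprocess
import tempfile

# Map the short name to this run's engine. The long FQDN here is a
# hypothetical example. glibc consults HOSTALIASES only for names
# without dots, and only in processes that had it set from the start.
with tempfile.NamedTemporaryFile("w", suffix=".aliases", delete=False) as f:
    f.write("engine lago-basic-suite-master-engine.lago.local\n")
    aliases_file = f.name

child_env = dict(os.environ, HOSTALIASES=aliases_file)
# Any resolver-using command started with this env sees the alias, e.g.:
# subprocess.run(["curl", "-k", "https://engine/..."], env=child_env)
print(open(aliases_file).read().strip())
```

As noted above, this breaks down when a VPN-pushed resolver answers first, so it is a convenience for plain setups rather than a general solution.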

'HOSTALIASES' is really awesome, but it doesn't always work.
I found that for machines connected to VPNs, the VPN-pushed DNS
servers take priority in name resolution and the 'HOSTALIASES'
definitions have no effect.



2. https://github.com/fritzw/ld-preload-open

With this, you can have a process-specific /etc/resolv.conf, pointing
this specific process to the internal nameserver inside lago/OST.
This requires building this small C library. Didn't try it or check
its code. Also can't find it pre-built in copr (or anywhere).

(
Along the way, if you like such tricks, found this:

https://github.com/gaul/awesome-ld-preload
)

This sounds really complex. As mentioned before I would prefer
if things could be done on the host VMs instead.


OST keeps the deployment directory in the source directory. Be careful if you
like to "git clean -dxf' since it will delete all the deployment and
you will have to
kill the vms manually later.

This is true, but there are reasons behind that - the way mock works
and the libvirt permissions needed to operate on VM images.



The next thing we need is the engine ca cert. It can be fetched like this:

$ curl -k 'https://engine/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' > ca.pem

I would expect OST to do this and put the file in the deployment directory.

We have that already [2].
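The same fetch from Python, stdlib only; the URL mirrors the curl command above, and verification is skipped for the same reason as `curl -k` (we are fetching the very CA needed to verify). The engine name is an assumption for your run:

```python
import ssl
import urllib.request

ENGINE = "engine"  # the engine's name/IP for your run (assumption)
URL = ("https://%s/ovirt-engine/services/pki-resource"
       "?resource=ca-certificate&format=X509-PEM-CA" % ENGINE)

def fetch_ca(path="ca.pem"):
    # _create_unverified_context is the stdlib equivalent of `curl -k`.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(URL, context=ctx, timeout=30) as resp, \
            open(path, "wb") as out:
        out.write(resp.read())
    return path
```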



To upload or download images, backup vms or use other modern examples from
the sdk, you need to have a configuration file like this:

$ cat ~/.config/ovirt.conf
[engine]
engine_url = https://engine
username = admin@internal
password = 123
cafile = ca.pem

With this uploading from the same directory where ca.pem is located
will work. If you want
it to work from any directory, use absolute path to the file.
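A sketch of how such a config file can be read with configparser; whether the SDK example scripts parse it exactly this way is an assumption, and the '/ovirt-engine/api' suffix is the usual SDK endpoint:

```python
import configparser
import os

def read_engine_config(path=os.path.expanduser("~/.config/ovirt.conf")):
    # Parse the [engine] section shown above into the keyword arguments
    # an SDK connection would expect.
    cfg = configparser.ConfigParser()
    cfg.read(path)
    eng = cfg["engine"]
    return {
        "url": eng["engine_url"] + "/ovirt-engine/api",
        "username": eng["username"],
        "password": eng["password"],
        "ca_file": eng["cafile"],
    }
```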

I created a test image using qemu-img and qemu-io:

$ qemu-img create -f qcow2 test.qcow2 1g

To write some data to the test image we can use qemu-io. This writes 64k of data
(b"\xf0" * 64 * 1024) to offset 1 MiB.

$ qemu-io -f qcow2 -c "write -P 240 1m 64k" test.qcow2

Never heard about qemu-io. Nice to know. Seems like it does not have a manpage
in el8, although I can find one elsewhere on the net.
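The same two commands can be scripted (a sketch; it assumes qemu-img/qemu-io in PATH). "-P 240" fills the range with the byte 0xf0, and "1m 64k" means offset 1 MiB, length 64 KiB, i.e. the payload is b"\xf0" * 64 * 1024:

```python
import subprocess

OFFSET = 1024 * 1024   # "1m"
LENGTH = 64 * 1024     # "64k"
PATTERN = 240          # "-P 240" == byte 0xf0

def make_test_image(path="test.qcow2"):
    # Create a 1g qcow2 and write the pattern, as in the shell commands.
    subprocess.run(["qemu-img", "create", "-f", "qcow2", path, "1g"],
                   check=True)
    subprocess.run(["qemu-io", "-f", "qcow2",
                    "-c", "write -P %d 1m 64k" % PATTERN, path], check=True)
    # 'read -P' makes qemu-io verify the pattern, failing on a mismatch.
    subprocess.run(["qemu-io", "-f", "qcow2",
                    "-c", "read -P %d 1m 64k" % PATTERN, path], check=True)
```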


Since this image contains only 64k of data, uploading it should be instant.

The last part we need is the imageio client package:

$ dnf install ovirt-imageio-client

To upload the image, we need at least one host up and storage domains
created. I did not find a way to prepare OST for this, so I simply ran it
after run_tests completed. It took about an hour.

To upload the image to raw sparse disk we can use:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
-c engine --sd-name nfs --disk-sparse --disk-format raw test.qcow2
[   0.0 ] Checking image...
[   0.0 ] Image format: qcow2
[   

[ovirt-devel] Re: Building lago from source

2020-11-05 Thread Marcin Sobczyk



On 11/4/20 11:05 PM, Nir Soffer wrote:

I'm trying to test with ost:
https://github.com/lago-project/lago/pull/815

So I cloned the project on the OST VM and built RPMs:

make
make rpm

The result is:
lago-1.0.2-1.el8.noarch.rpm  python3-lago-1.0.2-1.el8.noarch.rpm

But the lago version installed by setup_for_ost.sh is:
$ rpm -q lago
lago-1.0.11-1.el8.noarch

I tried to install lago from master, and then lago_init fail:

$ lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Using images ost-images-el8-host-installed-1-202011021248.x86_64,
ost-images-el8-engine-installed-1-202011021248.x86_64 containing
ovirt-engine-4.4.4-0.0.master.20201031195930.git8f858d6c01d.el8.noarch
vdsm-4.40.35.1-1.el8.x86_64
usage: lago [-h] [-l {info,debug,error,warning}] [--logdepth LOGDEPTH]
 [--version] [--out-format {default,flat,json,yaml}]
 [--prefix-path PREFIX_PATH] [--workdir-path WORKDIR_PATH]
 [--prefix-name PREFIX_NAME] [--ssh-user SSH_USER]
 [--ssh-password SSH_PASSWORD] [--ssh-tries SSH_TRIES]
 [--ssh-timeout SSH_TIMEOUT] [--libvirt_url LIBVIRT_URL]
 [--libvirt-user LIBVIRT_USER]
 [--libvirt-password LIBVIRT_PASSWORD]
 [--default_vm_type DEFAULT_VM_TYPE]
 [--default_vm_provider DEFAULT_VM_PROVIDER]
 [--default_root_password DEFAULT_ROOT_PASSWORD]
 [--lease_dir LEASE_DIR] [--reposync-dir REPOSYNC_DIR]
 [--ignore-warnings]
 VERB ...
lago: error: unrecognized arguments: --ssh-key
/home/nsoffer/src/ovirt-system-tests/deployment-basic-suite-master
/home/nsoffer/src/ovirt-system-tests/basic-suite-master/LagoInitFile

Do we use a customized lago version for ost? Where is the source?

Yes, you can find the source RPM in my copr repo [1].
The reason we use this one is that CI for lago is broken and cannot
be fixed/moved to STDCI easily, so we can't merge anything on GitHub.



Nir



[1] https://copr.fedorainfracloud.org/coprs/tinez/ost-stuff/


[ovirt-devel] Re: How to set up a (rh)el8 machine for running OST

2020-11-04 Thread Marcin Sobczyk



On 11/4/20 11:29 AM, Yedidyah Bar David wrote:

On Wed, Nov 4, 2020 at 12:18 PM Marcin Sobczyk  wrote:



On 11/3/20 7:21 PM, Nir Soffer wrote:

On Tue, Nov 3, 2020 at 8:05 PM Nir Soffer  wrote:

On Tue, Nov 3, 2020 at 6:53 PM Nir Soffer  wrote:

On Tue, Nov 3, 2020 at 3:22 PM Marcin Sobczyk  wrote:

Hi All,

there are multiple pieces of information floating around on how to set
up a machine
for running OST. Some of them outdated (like dealing with el7), some
of them more recent,
but still a bit messy.

Not long ago, in some email conversation, Milan presented an ansible
playbook that provided
the steps necessary to do that. We've picked up the playbook, tweaked
it a bit, made a convenience shell script wrapper that runs it, and
pushed that into OST project [1].

This script, along with the playbook, should be our
single-source-of-truth, one-stop
solution for the job. It's been tested by a couple of persons and
proved to be able
to set up everything on a bare (rh)el8 machine. If you encounter any
problems with the script
please either report it on the devel mailing list, directly to me, or
simply file a patch.
Let's keep it maintained.

Awesome, thanks!

So setup_for_ost.sh finished successfully (after more than an hour),
but now I see conflicting documentation and comments about how to
run test suites and how to cleanup after the run.

The docs say:
https://ovirt-system-tests.readthedocs.io/en/latest/general/running_tests/index.html

  ./run_suite.sh basic-suite-4.0

But I see other undocumented ways in recent threads:

  run_tests

Trying the run_test option, from recent Mail:


. lagofy.sh
lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k 
/usr/share/ost-images/el8_id_rsa

This fails:

$ . lagofy.sh
Suite basic-suite-master - lago_init
/usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Add your group to qemu's group: "usermod -a -G qemu nsoffer"

setup_for_ost.sh should handle this, no?

It does:
https://github.com/oVirt/ovirt-system-tests/blob/e1c1873d1e7de3f136e46b6355b03b07f05f358e/common/setup/setup_playbook.yml#L95
Maybe you didn't relog so the group inclusion would be effective?
But I agree there should be a message printed to the user if relogging
is necessary - I will write a patch for it.


[nsoffer@ost ovirt-system-tests]$ lago_init
/usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Using images ost-images-el8-host-installed-1-202011021248.x86_64,
ost-images-el8-engine-installed-1-202011021248.x86_64 containing
ovirt-engine-4.4.4-0.0.master.20201031195930.git8f858d6c01d.el8.noarch
vdsm-4.40.35.1-1.el8.x86_64
@ Initialize and populate prefix:
# Initialize prefix:
  * Create prefix dirs:
  * Create prefix dirs: Success (in 0:00:00)
  * Generate prefix uuid:
  * Generate prefix uuid: Success (in 0:00:00)
  * Copying ssh key:
  * Copying ssh key: Success (in 0:00:00)
  * Tag prefix as initialized:
  * Tag prefix as initialized: Success (in 0:00:00)
# Initialize prefix: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-engine:
  * Create disk root:
  * Create disk root: Success (in 0:00:00)
  * Create disk nfs:
  * Create disk nfs: Success (in 0:00:00)
  * Create disk iscsi:
  * Create disk iscsi: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-engine: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host-0:
  * Create disk root:
  * Create disk root: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host-0: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host-1:
  * Create disk root:
  * Create disk root: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host-1: Success (in 0:00:00)
# Copying any deploy scripts:
# Copying any deploy scripts: Success (in 0:00:00)
# calling yaml.load() without Loader=... is deprecated, as the
default Loader is unsafe. Please read https://msg.pyyaml.org/load for
full details.
# Missing current link, setting it to default
@ Initialize and populate prefix: ERROR (in 0:00:01)
Error occured, aborting
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 987, in main
  cli_plugins[args.verb].do_run(args)
File "/usr/lib/python3.6/site-packages/lago/plugins/cli.py", line
186, in do_run
  self._do_run(**vars(args))
File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 207, in do_init
  ssh_key=ssh_key,
File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1143,
in virt_conf_from_stream
  ssh_key=ssh_key
File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1269,
in virt_conf
  net_specs=conf['nets'],
File "/usr/lib/python3.6/site-packages/lago/virt.py", line 101, in __init__
  self._nets[na

[ovirt-devel] Re: How to set up a (rh)el8 machine for running OST

2020-11-04 Thread Marcin Sobczyk



On 11/3/20 7:21 PM, Nir Soffer wrote:

On Tue, Nov 3, 2020 at 8:05 PM Nir Soffer  wrote:

On Tue, Nov 3, 2020 at 6:53 PM Nir Soffer  wrote:

On Tue, Nov 3, 2020 at 3:22 PM Marcin Sobczyk  wrote:

Hi All,

there are multiple pieces of information floating around on how to set
up a machine
for running OST. Some of them outdated (like dealing with el7), some
of them more recent,
but still a bit messy.

Not long ago, in some email conversation, Milan presented an ansible
playbook that provided
the steps necessary to do that. We've picked up the playbook, tweaked
it a bit, made a convenience shell script wrapper that runs it, and
pushed that into OST project [1].

This script, along with the playbook, should be our
single-source-of-truth, one-stop
solution for the job. It's been tested by a couple of persons and
proved to be able
to set up everything on a bare (rh)el8 machine. If you encounter any
problems with the script
please either report it on the devel mailing list, directly to me, or
simply file a patch.
Let's keep it maintained.

Awesome, thanks!

So setup_for_ost.sh finished successfully (after more than an hour),
but now I see conflicting documentation and comments about how to
run test suites and how to cleanup after the run.

The docs say:
https://ovirt-system-tests.readthedocs.io/en/latest/general/running_tests/index.html

 ./run_suite.sh basic-suite-4.0

But I see other undocumented ways in recent threads:

 run_tests

Trying the run_test option, from recent Mail:


. lagofy.sh
lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k 
/usr/share/ost-images/el8_id_rsa

This fails:

$ . lagofy.sh
Suite basic-suite-master - lago_init
/usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Add your group to qemu's group: "usermod -a -G qemu nsoffer"

setup_for_ost.sh should handle this, no?
It does: 
https://github.com/oVirt/ovirt-system-tests/blob/e1c1873d1e7de3f136e46b6355b03b07f05f358e/common/setup/setup_playbook.yml#L95

Maybe you didn't relog so the group inclusion would be effective?
But I agree there should be a message printed to the user if relogging 
is necessary - I will write a patch for it.




[nsoffer@ost ovirt-system-tests]$ lago_init
/usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Using images ost-images-el8-host-installed-1-202011021248.x86_64,
ost-images-el8-engine-installed-1-202011021248.x86_64 containing
ovirt-engine-4.4.4-0.0.master.20201031195930.git8f858d6c01d.el8.noarch
vdsm-4.40.35.1-1.el8.x86_64
@ Initialize and populate prefix:
   # Initialize prefix:
 * Create prefix dirs:
 * Create prefix dirs: Success (in 0:00:00)
 * Generate prefix uuid:
 * Generate prefix uuid: Success (in 0:00:00)
 * Copying ssh key:
 * Copying ssh key: Success (in 0:00:00)
 * Tag prefix as initialized:
 * Tag prefix as initialized: Success (in 0:00:00)
   # Initialize prefix: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-engine:
 * Create disk root:
 * Create disk root: Success (in 0:00:00)
 * Create disk nfs:
 * Create disk nfs: Success (in 0:00:00)
 * Create disk iscsi:
 * Create disk iscsi: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-engine: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-host-0:
 * Create disk root:
 * Create disk root: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-host-0: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-host-1:
 * Create disk root:
 * Create disk root: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-host-1: Success (in 0:00:00)
   # Copying any deploy scripts:
   # Copying any deploy scripts: Success (in 0:00:00)
   # calling yaml.load() without Loader=... is deprecated, as the
default Loader is unsafe. Please read https://msg.pyyaml.org/load for
full details.
   # Missing current link, setting it to default
@ Initialize and populate prefix: ERROR (in 0:00:01)
Error occured, aborting
Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 987, in main
 cli_plugins[args.verb].do_run(args)
   File "/usr/lib/python3.6/site-packages/lago/plugins/cli.py", line
186, in do_run
 self._do_run(**vars(args))
   File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 207, in do_init
 ssh_key=ssh_key,
   File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1143,
in virt_conf_from_stream
 ssh_key=ssh_key
   File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1269,
in virt_conf
 net_specs=conf['nets'],
   File "/usr/lib/python3.6/site-packages/lago/virt.py", line 101, in __init__
 self._nets[name] = self._create_net(spec, compat)
   File "/usr/lib/python3.6/site-packages/lago/virt.py", line 113, in 
_create_net
 return c

[ovirt-devel] How to set up a (rh)el8 machine for running OST

2020-11-03 Thread Marcin Sobczyk
Hi All,

there are multiple pieces of information floating around on how to set
up a machine
for running OST. Some of them outdated (like dealing with el7), some
of them more recent,
but still a bit messy.

Not long ago, in some email conversation, Milan presented an ansible
playbook that provided
the steps necessary to do that. We've picked up the playbook, tweaked
it a bit, made a convenience shell script wrapper that runs it, and
pushed that into OST project [1].

This script, along with the playbook, should be our
single-source-of-truth, one-stop
solution for the job. It's been tested by a couple of persons and
proved to be able
to set up everything on a bare (rh)el8 machine. If you encounter any
problems with the script
please either report it on the devel mailing list, directly to me, or
simply file a patch.
Let's keep it maintained.

Regards, Marcin

[1] https://gerrit.ovirt.org/#/c/111749/


[ovirt-devel] Branching out 4.3 in ovirt-system-tests

2020-10-12 Thread Marcin Sobczyk
Hi all,

after minimizing the usage of lago in basic suite,
and some minor adjustments in the network suite, we are finally
able to remove lago OST plugin as a dependency [1].

This, however, comes at the price of keeping lots of ugly ifology, e.g. [2][3].
There's a big disparity between the OST runs we have on el7 and el8.
There's also tons of symlink-based code sharing between suites - be it 4.3
suites and master suites, or simply different types of suites.
The basic suite has its own 'test_utils', which is copied/symlinked
in multiple places. There's also 'ost_utils', which is really messy ATM.
It's very hard to keep track of and maintain all of this...

At this moment, we are able to run basic suite and network suite
on el8, with prebuilt ost-images and without lago plugin.
HE suites should be the next step. We have patches that make them
py3-compatible that probably still need some attention [4][5].
We don't have any prebuilt HE ost-images, but this will be handled
in the nearest future.

I think it's good time to detach ourselves from the legacy stuff
and start with a clean slate. My proposition would be to branch
out 4.3 in ovirt-system-tests and not use py2/el7 in the master
branch at all. This would allow us to focus on py3, el8 and ost-images
efforts while keeping the legacy stuff intact.

WDYT?

Regards, Marcin

[1] https://gerrit.ovirt.org/#/c/111643/
[2] https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master/control.sh
[3] 
https://gerrit.ovirt.org/#/c/111643/6/basic-suite-master/test-scenarios/conftest.py
[4] https://gerrit.ovirt.org/108809
[5] https://gerrit.ovirt.org/110097


[ovirt-devel] Re: OST fails during 002_bootstrap_pytest

2020-09-24 Thread Marcin Sobczyk



On 9/24/20 9:44 AM, Martin Perina wrote:



On Thu, Sep 24, 2020 at 8:26 AM Yedidyah Bar David > wrote:


On Wed, Sep 23, 2020 at 4:42 PM Vojtech Juranek
mailto:vjura...@redhat.com>> wrote:
>
> Hi,
> can anybody look at OST? It fails constantly with the error below.
> See e.g. [1, 2] for full logs.
> Thanks
> Vojta
>
> [1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/
> [2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/
>
> 13:07:16 ../basic-suite-master/test-scenarios/
> 002_bootstrap_pytest.py::test_verify_engine_backup [WARNING]:
Invalid
> characters were found in group names but not replaced, use
> 13:07:22 - to see details

I think this warning is unrelated, it's coming from here:

https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/consoleText


../basic-suite-master/test-scenarios/001_initialize_engine_pytest.py::test_check_ansible_connectivity
[WARNING]: Invalid characters were found in group names but not
replaced, use
- to see details

Yeah, these warnings are completely unrelated and harmless (although 
pretty ugly)




Perhaps it's due to:

ost_utils/ost_utils/pytest/fixtures/ansible.py

ANSIBLE_ENGINE_PATTERN = "~lago-.*-engine"
ANSIBLE_HOSTS_PATTERN = "~lago-.*-host-[0-9]"
ANSIBLE_HOST0_PATTERN = "~lago-.*-host-0"
ANSIBLE_HOST1_PATTERN = "~lago-.*-host-1"

?

Perhaps this can help understand:

https://gerrit.ovirt.org/111433
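For context, these fixture values are Ansible host patterns; the leading "~" makes them regular expressions. A quick stand-alone sanity check (a sketch that simplifies how Ansible applies them internally):

```python
import re

# Patterns as in ost_utils, names as lago generates them.
ENGINE_PATTERN = "~lago-.*-engine"
HOSTS_PATTERN = "~lago-.*-host-[0-9]"

def pattern_matches(pattern, host):
    # Strip the "~" marker and apply the regex to an inventory name.
    return re.match(pattern.lstrip("~"), host) is not None

print(pattern_matches(ENGINE_PATTERN, "lago-basic-suite-master-engine"))  # True
print(pattern_matches(HOSTS_PATTERN, "lago-basic-suite-master-host-0"))   # True
print(pattern_matches(ENGINE_PATTERN, "lago-basic-suite-master-host-0"))  # False
```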


Adding Marcin ...
No, it's for a different reason - it's about how lago creates the ansible 
inventory.
This is fixed in py3-based lago, so you won't see these errors in el8 
OST runs, but they are still visible in el7 runs.

The fix for this is here: https://github.com/lago-project/lago/pull/814

Overall the ansible output in OST should be improved because it's much 
too noisy.

I'll take care of it once I get rid of lago dependencies in basic suite.




Best regards,

> 13:07:22 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4)
doesn't match
> a supported version!
> 13:07:22   RequestsDependencyWarning)
> 13:07:22 lago-basic-suite-master-engine | CHANGED => {
> 13:07:22     "changed": true,
> 13:07:22     "gid": 0,
> 13:07:22     "group": "root",
> 13:07:22     "mode": "0755",
> 13:07:22     "owner": "root",
> 13:07:22     "path": "/var/log/ost-engine-backup",
> 13:07:22     "secontext": "unconfined_u:object_r:var_log_t:s0",
> 13:07:22     "size": 6,
> 13:07:22     "state": "directory",
> 13:07:22     "uid": 0
> 13:07:22 }
>
> 13:07:44 [WARNING]: Invalid characters were found in group names
but not
> replaced, use
> 13:07:44 - to see details
> 13:07:44 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
> RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4)
doesn't match
> a supported version!
> 13:07:44   RequestsDependencyWarning)
> 13:07:44 lago-basic-suite-master-engine | FAILED | rc=1 >>
> 13:07:44 Start of engine-backup with mode 'backup'
> 13:07:44 scope: all
> 13:07:44 archive file: /var/log/ost-engine-backup/backup.tgz
> 13:07:44 log file: /var/log/ost-engine-backup/log.txt
> 13:07:44 Backing up:
> 13:07:44 Notifying engine
> 13:07:44 - Files
> 13:07:44 - Engine database 'engine'
> 13:07:44 - DWH database 'ovirt_engine_history'
> 13:07:44 - Grafana database '/var/lib/grafana/grafana.db'
> 13:07:44 Notifying engineFATAL: failed to backup
/var/lib/grafana/grafana.db
> with sqlite3non-zero return code
> 13:17:47 FAILED



-- 
Didi




--
Martin Perina
Manager, Software Engineering
Red Hat Czech s.r.o.



[ovirt-devel] Re: OST fails during 002_bootstrap_pytest

2020-09-23 Thread Marcin Sobczyk



On 9/23/20 4:26 PM, Nir Soffer wrote:

On Wed, Sep 23, 2020 at 4:42 PM Vojtech Juranek  wrote:

Hi,
can anybody look at OST? It fails constantly with the error below.
See e.g. [1, 2] for full logs.
Thanks
Vojta

[1] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7381/
[2] https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/

13:07:16 ../basic-suite-master/test-scenarios/
002_bootstrap_pytest.py::test_verify_engine_backup [WARNING]: Invalid
characters were found in group names but not replaced, use
13:07:22 - to see details
13:07:22 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
a supported version!
13:07:22   RequestsDependencyWarning)
13:07:22 lago-basic-suite-master-engine | CHANGED => {
13:07:22 "changed": true,
13:07:22 "gid": 0,
13:07:22 "group": "root",
13:07:22 "mode": "0755",
13:07:22 "owner": "root",
13:07:22 "path": "/var/log/ost-engine-backup",
13:07:22 "secontext": "unconfined_u:object_r:var_log_t:s0",
13:07:22 "size": 6,
13:07:22 "state": "directory",
13:07:22 "uid": 0
13:07:22 }

13:07:44 [WARNING]: Invalid characters were found in group names but not
replaced, use
13:07:44 - to see details
13:07:44 /usr/lib/python2.7/site-packages/requests/__init__.py:91:
RequestsDependencyWarning: urllib3 (1.25.10) or chardet (3.0.4) doesn't match
a supported version!
13:07:44   RequestsDependencyWarning)
13:07:44 lago-basic-suite-master-engine | FAILED | rc=1 >>
13:07:44 Start of engine-backup with mode 'backup'
13:07:44 scope: all
13:07:44 archive file: /var/log/ost-engine-backup/backup.tgz
13:07:44 log file: /var/log/ost-engine-backup/log.txt
13:07:44 Backing up:
13:07:44 Notifying engine
13:07:44 - Files
13:07:44 - Engine database 'engine'
13:07:44 - DWH database 'ovirt_engine_history'
13:07:44 - Grafana database '/var/lib/grafana/grafana.db'
13:07:44 Notifying engineFATAL: failed to backup /var/lib/grafana/grafana.db

A more descriptive error message can be found here [3]:

2020-09-23 08:16:09 94947: Backing up grafana database to 
/tmp/engine-backup.sHM28RhfZI/tar/db/grafana.db
/usr/bin/engine-backup: line 1098: sqlite3: command not found
2020-09-23 08:16:09 94947: FATAL: failed to backup /var/lib/grafana/grafana.db 
with sqlite3



[3] 
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/7382/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap_pytest.py/lago-basic-suite-master-engine/_var_log/ost-engine-backup/log.txt/*view*/
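So the failure is just a missing sqlite3 CLI on the engine. For reference, engine-backup shells out to that CLI; the same consistent online copy can be made with the Python stdlib (a sketch, Python 3.7+ for Connection.backup; the paths are the ones from the log above):

```python
import sqlite3

def backup_grafana_db(src="/var/lib/grafana/grafana.db",
                      dst="/tmp/grafana-backup.db"):
    # Online copy of a SQLite database, equivalent to the CLI's ".backup".
    source = sqlite3.connect(src)
    target = sqlite3.connect(dst)
    try:
        source.backup(target)
    finally:
        source.close()
        target.close()
```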



with sqlite3 (non-zero return code)

Didi, is this related to the new sqlite change?


13:17:47 FAILED





[ovirt-devel] Host installation is broken across OST suites

2020-08-31 Thread Marcin Sobczyk

Hi,

OST suites seem to be broken, example runs [1][2].
In 'engine.log' [3] there is a problem reported:

2020-08-31 10:54:50,875+02 ERROR 
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] 
(EE-ManagedThreadFactory-engine-Thread-2) [568e87b9] Host installation failed 
for host '0524ee6a-815b-4f7c-8ac1-a085b9870325', 
'lago-basic-suite-master-host-1': null
2020-08-31 10:54:50,875+02 DEBUG 
[org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand] 
(EE-ManagedThreadFactory-engine-Thread-2) [568e87b9] Exception: 
java.lang.NullPointerException
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.hostdeploy.InstallVdsInternalCommand.executeCommand(InstallVdsInternalCommand.java:190)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeWithoutTransaction(CommandBase.java:1169)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeActionInTransactionScope(CommandBase.java:1327)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.runInTransaction(CommandBase.java:2003)
at 
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInSuppressed(TransactionSupport.java:140)
at 
org.ovirt.engine.core.utils//org.ovirt.engine.core.utils.transaction.TransactionSupport.executeInScope(TransactionSupport.java:79)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.execute(CommandBase.java:1387)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.CommandBase.executeAction(CommandBase.java:419)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.executor.DefaultBackendActionExecutor.execute(DefaultBackendActionExecutor.java:13)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.runAction(Backend.java:442)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.runActionImpl(Backend.java:424)
at 
deployment.engine.ear.bll.jar//org.ovirt.engine.core.bll.Backend.runInternalAction(Backend.java:630)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at 
org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.ManagedReferenceMethodInterceptor.processInvocation(ManagedReferenceMethodInterceptor.java:52)
at 
org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at 
org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext$Invocation.proceed(InterceptorContext.java:509)
at 
org.jboss.as.weld.common@19.1.0.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.delegateInterception(Jsr299BindingsInterceptor.java:79)
at 
org.jboss.as.weld.common@19.1.0.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.doMethodInterception(Jsr299BindingsInterceptor.java:89)
at 
org.jboss.as.weld.common@19.1.0.Final//org.jboss.as.weld.interceptors.Jsr299BindingsInterceptor.processInvocation(Jsr299BindingsInterceptor.java:102)
at 
org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.interceptors.UserInterceptorFactory$1.processInvocation(UserInterceptorFactory.java:63)
at 
org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at 
org.jboss.as.ejb3@19.1.0.Final//org.jboss.as.ejb3.component.invocationmetrics.ExecutionTimeInterceptor.processInvocation(ExecutionTimeInterceptor.java:43)
at 
org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at 
org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.concurrent.ConcurrentContextInterceptor.processInvocation(ConcurrentContextInterceptor.java:45)
at 
org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at 
org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InitialInterceptor.processInvocation(InitialInterceptor.java:40)
at 
org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at 
org.jboss.invocation@1.5.2.Final//org.jboss.invocation.ChainedInterceptor.processInvocation(ChainedInterceptor.java:53)
at 
org.jboss.as.ee@19.1.0.Final//org.jboss.as.ee.component.interceptors.ComponentDispatcherInterceptor.processInvocation(ComponentDispatcherInterceptor.java:52)
at 
org.jboss.invocation@1.5.2.Final//org.jboss.invocation.InterceptorContext.proceed(InterceptorContext.java:422)
at 

[ovirt-devel] Re: Check OVF_STORE volume status task failures

2020-07-21 Thread Marcin Sobczyk

Hi,

this has been fixed for some time by [1], but I think the appliance 
still uses an old version of ovirt-engine-wildfly-overlay.
The fixed version is 19.1.0-2 [2]. Nir, could you take a look and 
bump the version in the appliance if necessary?


Thanks, Marcin

[1] https://gerrit.ovirt.org/#/c/110324/
[2] https://gerrit.ovirt.org/#/c/110324/1/automation/build-artifacts.sh

On 7/20/20 3:53 PM, Artem Hrechanychenko wrote:

Hi all,
maybe I'm missing some information, but I'm still having trouble with 
the HE installation using OST CI.


Is that already fixed?

https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/10415/

https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/10415/artifact/check-patch.he-basic_suite_master.el8.x86_64/test_logs/he-basic-suite-master/post-he_deploy/lago-he-basic-suite-master-host-0/_var_log/ovirt-hosted-engine-setup/ovirt-hosted-engine-setup-20200720061238-soz41s.log

2020-07-20 06:36:04,455-0400 INFO otopi.ovirt_hosted_engine_setup.ansible_utils 
ansible_utils._process_output:111 TASK [ovirt.hosted_engine_setup : Check 
OVF_STORE volume status]
2020-07-20 06:40:22,815-0400 DEBUG otopi.ovirt_hosted_engine_setup.ansible_utils ansible_utils._process_output:105 {'results': [{'cmd': ['vdsm-client', 'Volume', 'getInfo', 'storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863', 'storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142', 'imageID=4d2f7009-5b79-4b44-b0ef-e152bc51649f', 'volumeID=7b953d0e-662d-4e72-9fdc-823ea867262b'], 'stdout': '{\n"apparentsize": "134217728",\n"capacity": "134217728",\n"children": [],\n"ctime": "1595241259",\n"description": "{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk Description\\":\\"OVF_STORE\\"}",\n"disktype": "OVFS",\n"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",\n"format": "RAW",\n"generation": 0,\n"image": "4d2f7009-5b79-4b44-b0ef-e152bc51649f",\n"lease": {\n"offset": 0,\n"owners": [],\n"path": "/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/4d2f7009-5b79-4b44-b0ef-e152bc51649f/7b953d0e-662d-4e72-9fdc-823ea867262b.lease",\n"version": null\n},\n"legality": "LEGAL",\n"mtime": "0",\n"parent": "----",\n"pool": "",\n"status": "OK",\n"truesize": "134217728",\n"type": "PREALLOCATED",\n"uuid": "7b953d0e-662d-4e72-9fdc-823ea867262b",\n"voltype": "LEAF"\n}', 'stderr': '', 'rc': 0, 'start': '2020-07-20 06:38:13.456845', 'end': '2020-07-20 
06:38:13.897280', 'delta': '0:00:00.440435', 'changed': True, 'invocation': {'module_args': {'_raw_params': 'vdsm-client Volume getInfo storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863 storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142 imageID=4d2f7009-5b79-4b44-b0ef-e152bc51649f volumeID=7b953d0e-662d-4e72-9fdc-823ea867262b', 'warn': True, '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['{', '"apparentsize": "134217728",', '"capacity": "134217728",', '"children": [],', '"ctime": "1595241259",', '"description": "{\\"Updated\\":false,\\"Last Updated\\":null,\\"Storage Domains\\":[{\\"uuid\\":\\"6685cca5-f0e1-4831-acdf-6f7b50596142\\"}],\\"Disk Description\\":\\"OVF_STORE\\"}",', '"disktype": "OVFS",', '"domain": "6685cca5-f0e1-4831-acdf-6f7b50596142",', '"format": "RAW",', '"generation": 0,', '"image": "4d2f7009-5b79-4b44-b0ef-e152bc51649f",', '"lease": {', '"offset": 0,', '"owners": [],', '"path": "/rhev/data-center/mnt/lago-he-basic-suite-master-storage:_exports_nfs__he/6685cca5-f0e1-4831-acdf-6f7b50596142/images/4d2f7009-5b79-4b44-b0ef-e152bc51649f/7b953d0e-662d-4e72-9fdc-823ea867262b.lease",', '"version": null', '},', '"legality": "LEGAL",', '"mtime": "0",', '"parent": "----",', '"pool": "",', '"status": "OK",', '"truesize": "134217728",', '"type": "PREALLOCATED",', '
"uuid": "7b953d0e-662d-4e72-9fdc-823ea867262b",', '"voltype": "LEAF"', '}'], 'stderr_lines': [], '_ansible_no_log': False, 'failed': True, 'attempts': 12, 'item': {'name': 'OVF_STORE', 'image_id': '7b953d0e-662d-4e72-9fdc-823ea867262b', 'id': '4d2f7009-5b79-4b44-b0ef-e152bc51649f'}, 'ansible_loop_var': 'item', '_ansible_item_label': {'name': 'OVF_STORE', 'image_id': '7b953d0e-662d-4e72-9fdc-823ea867262b', 'id': '4d2f7009-5b79-4b44-b0ef-e152bc51649f'}}, {'cmd': ['vdsm-client', 'Volume', 'getInfo', 'storagepoolID=03a466fe-ca73-11ea-b77e-5452c0a8c863', 'storagedomainID=6685cca5-f0e1-4831-acdf-6f7b50596142', 'imageID=044e384a-dedf-4589-8dfb-beca170138ee', 'volumeID=033d64fd-6f93-42be-84bc-082b03095ef3'], 'stdout': '{\n
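For reference, the condition that the "Check OVF_STORE volume status" task keeps retrying on can be distilled from the dump above. This is only a sketch, assuming the vdsm-client JSON shape shown; in the failing run the volume's description still reports "Updated": false, so the task exhausts its attempts:

```python
import json

def ovf_store_updated(volume_info_json):
    # The OVF_STORE volume's description is itself a JSON string;
    # the deploy task retries until it reports "Updated": true.
    info = json.loads(volume_info_json)
    desc = json.loads(info["description"])
    return desc.get("Updated", False)

# Shape taken from the vdsm-client dump above, trimmed to the
# fields this check actually looks at.
sample = json.dumps({
    "status": "OK",
    "description": json.dumps({
        "Updated": False,
        "Disk Description": "OVF_STORE",
    }),
})
```

Running `ovf_store_updated(sample)` on the failing run's data returns False, which matches the 12 exhausted attempts in the log.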

[ovirt-devel] Re: execution failed: javax.net.ssl.SSLPeerUnverifiedException

2020-07-20 Thread Marcin Sobczyk

Hi,

the problem was most probably caused by a regression in httpclient:

https://issues.apache.org/jira/browse/HTTPCLIENT-2047

and should be fixed by now with:

https://gerrit.ovirt.org/#/c/110324/

Regards, Marcin

On 7/15/20 10:05 AM, Martin Perina wrote:

Artur,

could you please add some additional logging into the engine HTTP 
client to find out why apache-http-client complains about the certificate?


Thanks,
Martin


On Wed, Jul 15, 2020 at 9:43 AM Yedidyah Bar David wrote:


On Thu, Jul 9, 2020 at 12:32 PM Marcin Sobczyk wrote:
>
> Hi,
>
> On 7/8/20 3:34 PM, Yedidyah Bar David wrote:
> > Did you also get in engine.log
"javax.net.ssl.SSLPeerUnverifiedException"?
> I was also able to reproduce this on my server, but I'm baffled
by this
> one...
> I enabled debug logs on the engine with [1] and got this stack
trace [2],
> but the certs seem ok to me:
>
> 1. I verified the hostname in the certificate by running on the
host:
>
> openssl s_client \
>      -connect 127.0.0.1:54321 \
>      -CAfile /etc/pki/vdsm/certs/cacert.pem \
>      -cert /etc/pki/vdsm/certs/vdsmcert.pem \
>      -key /etc/pki/vdsm/keys/vdsmkey.pem \
>      -verify_hostname lago-he-basic-suite-master-host-0.lago.local
>
> 2. curl is also happy:
>
> curl \
>      --cacert /etc/pki/vdsm/certs/cacert.pem \
>      --cert /etc/pki/vdsm/certs/vdsmcert.pem \
>      --key /etc/pki/vdsm/keys/vdsmkey.pem \
> https://lago-he-basic-suite-master-host-0.lago.local:54321
>
> 3. on the hosted engine there is proper entry in '/etc/hosts':
>
> [root@lago-he-basic-suite-master-engine certs]# cat /etc/hosts
> 127.0.0.1   localhost localhost.localdomain localhost4
> localhost4.localdomain4
> ::1         localhost localhost.localdomain localhost6
> localhost6.localdomain6
> 192.168.200.3 lago-he-basic-suite-master-host-0.lago.local
> 192.168.222.76 lago-he-basic-suite-master-engine.lago.local #
> hosted-engine-setup-/var/tmp/localvm9k3eqtf7
>
> 4. and dig -x seems to resolve properly:
>
> [root@lago-he-basic-suite-master-engine certs]# dig +short -x
192.168.200.3
> lago-he-basic-suite-master-host-0.lago.local.
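The verification that the four checks above perform by hand can also be sketched in Python. This is only a sketch for comparison with the Java side; the cert/key paths are the VDSM ones from check 1, and the helper name is made up here:

```python
import ssl

def make_verifying_client_context(cacert=None, cert=None, key=None):
    """Build a TLS client context that verifies both the peer's
    certificate chain and its hostname -- the same checks whose
    failure surfaces in Java as SSLPeerUnverifiedException."""
    ctx = ssl.create_default_context(cafile=cacert)
    if cert and key:
        # Client certificate, e.g. /etc/pki/vdsm/certs/vdsmcert.pem
        # and /etc/pki/vdsm/keys/vdsmkey.pem from the openssl check.
        ctx.load_cert_chain(cert, key)
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.check_hostname = True
    return ctx
```

Connecting with `ctx.wrap_socket(sock, server_hostname="lago-he-basic-suite-master-host-0.lago.local")` would then raise `ssl.SSLCertVerificationError` on a name mismatch, the Python analogue of the Java exception; since openssl and curl both succeed with these inputs, the mismatch is likely on the Java client's side.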
>
> If anyone else has some ideas what else could be checked then please
> ping me.
>
> Marcin
>
> [1] https://gerrit.ovirt.org/110211
> [2] http://pastebin.test.redhat.com/882851
>
> >
> > On Wed, Jul 8, 2020 at 4:25 PM Artem Hrechanychenko wrote:
> >> Reproduced locally without using Jenkins
> >>
> >>> [ INFO  ] TASK [ovirt.hosted_engine_setup : Add HE disks]
> >>> [ ERROR ] {'msg': 'Timeout exceed while waiting on result
state of the entity.', 'exception': 'Traceback (most recent call
last):\n  File

"/tmp/ansible_ovirt_disk_28_payload_vtqyyibx/ansible_ovirt_disk_28_payload.zip/ansible/modules/ovirt_disk_28.py",
line 678, in main\n  File

"/tmp/ansible_ovirt_disk_28_payload_vtqyyibx/ansible_ovirt_disk_28_payload.zip/ansible/module_utils/ovirt.py",
line 646, in create\n
poll_interval=self._module.params[\'poll_interval\'],\n File

"/tmp/ansible_ovirt_disk_28_payload_vtqyyibx/ansible_ovirt_disk_28_payload.zip/ansible/module_utils/ovirt.py",
line 364, in wait\n    raise Exception("Timeout exceed while
waiting on result state of the entity.")\nException: Timeout
exceed while waiting on result state of the entity.\n', 'failed':
True, 'invocation': {'module_args': {'name':
'HostedEngineConfigurationImage', 'size': '1GiB', 'format': 'raw',
'sparse': False, 'description': 'Hosted-Engine configuration
disk', 'content_type': 'hosted_engine_configuration', 'interface':
'virtio', 'storage_domain': 'hosted_storage', 'wait': True,
'timeout': 600, 'auth': {'token':

'rAqX1OJIbJyMrA1aWVR-AR54T2lsiBbalN80dWugpfHFBqwiCe4rz3porngvlFSE90k-FEqagPPFboU6ew1hPw',
'url':
'https://lago-he-basic-suite-master-engine.lago.local/ovirt-engine/api',
'ca_file': None, 'insecure': True, 'timeout': 0, 'compress': True,
'kerberos': False, 'headers': None}, 'poll_interval': 3,
'fetch_nested': False, 'nested_attributes': [], 'state':
'present', 'force': False, 'id': None, 'vm_name': None, 'vm_id':
None, 'storage_domains': None, 'profile': None, 'quota_id': None,
'bootable': None, 'shareable': None, 'logical_unit': None,
'download_image_path': None, 'upload_image_path': None,
'sparsify': None, 'openstack_volume_type': None, 
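The error above comes out of a generic poll-until-timeout loop in the ansible module's `wait()` helper. The shape of that loop can be sketched as follows; this is a simplified sketch, not the real module_utils/ovirt.py code, which also refetches the entity between polls. The defaults mirror the `timeout: 600` and `poll_interval: 3` module arguments visible in the dump:

```python
import time

def wait_for(poll, timeout=600, poll_interval=3):
    # Call poll() until it returns a truthy result or the timeout
    # elapses; the error string matches the one in the log above.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = poll()
        if result:
            return result
        time.sleep(poll_interval)
    raise Exception(
        "Timeout exceed while waiting on result state of the entity.")
```

In the failing run the entity never reached the expected state within 600 seconds, so the loop above raised rather than returning.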

[ovirt-devel] Huge queue for OST check patch job

2020-07-16 Thread Marcin Sobczyk

Hi All,

The 'check-patch' pipeline for OST currently has a huge queue of jobs 
waiting to be executed, some stuck for 14 hours [1]. When testing 
patches for OST, please consider trimming down the triggered pipelines 
like this [2] and marking the patch "WIP" until it's ready.
Also, when working on a whole series of patches, getting them merged 
more quickly will help a lot too.


Thanks, Marcin

[1] 
https://jenkins.ovirt.org/view/oVirt%20system%20tests/job/ovirt-system-tests_standard-check-patch/10336/

[2] https://gerrit.ovirt.org/#/c/110304/6/stdci.yaml
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZYJCKFZ665LZ5FDVVUMJQECEP7GDQAYP/


[ovirt-devel] Import failures for networking vdsm modules in basic and he suite

2020-07-13 Thread Marcin Sobczyk

Hi,

I'm observing some issues with failing network imports. For the basic 
suite, supervdsmd fails to start with:


Jul 13 17:24:11 lago-basic-suite-master-host-0 python3[29380]: detected 
unhandled Python exception in '/usr/share/vdsm/supervdsmd'
Jul 13 17:24:11 lago-basic-suite-master-host-0 abrt-server[29382]: Not 
saving repeating crash in '/usr/share/vdsm/supervdsmd'
Jul 13 17:24:11 lago-basic-suite-master-host-0 daemonAdapter[29380]: 
Traceback (most recent call last):
Jul 13 17:24:11 lago-basic-suite-master-host-0 daemonAdapter[29380]:   
File "/usr/share/vdsm/supervdsmd", line 24, in <module>
Jul 13 17:24:11 lago-basic-suite-master-host-0 daemonAdapter[29380]: 
from vdsm import supervdsm_server
Jul 13 17:24:11 lago-basic-suite-master-host-0 daemonAdapter[29380]:   
File "/usr/lib/python3.6/site-packages/vdsm/supervdsm_server.py", line 
66, in <module>
Jul 13 17:24:11 lago-basic-suite-master-host-0 daemonAdapter[29380]: 
from vdsm.network.initializer import init_privileged_network_components
Jul 13 17:24:11 lago-basic-suite-master-host-0 daemonAdapter[29380]:   
File "/usr/lib/python3.6/site-packages/vdsm/network/initializer.py", 
line 32, in <module>
Jul 13 17:24:11 lago-basic-suite-master-host-0 daemonAdapter[29380]: 
from vdsm.network import nmstate
Jul 13 17:24:11 lago-basic-suite-master-host-0 daemonAdapter[29380]: 
ImportError: cannot import name 'nmstate'


While deploying the HE basic suite I can see something like this (it's 
a bit weird because it's extracted from ansible logs, so it might be 
trimmed):


File \"/usr/libexec/vdsm/vm_libvirt_hook.py\", line 29, in <module>
    from vdsm.virt.vmdevices import storage
  File 
\"/usr/lib/python3.6/site-packages/vdsm/virt/vmdevices/__init__.py\", 
line 27, in <module>
    from . import graphics
  File 
\"/usr/lib/python3.6/site-packages/vdsm/virt/vmdevices/graphics.py\", 
line 27, in <module>
    from vdsm.virt import displaynetwork
  File 
\"/usr/lib/python3.6/site-packages/vdsm/virt/displaynetwork.py\", line 
23, in <module>
    from vdsm.network import api as net_api
  File \"/usr/lib/python3.6/site-packages/vdsm/network/api.py\", line 
34, in <module>
    from vdsm.network import netswitch
  File 
\"/usr/lib/python3.6/site-packages/vdsm/network/netswitch/__init__.py\", 
line 23, in <module>
    from . import configurator
  File \"/usr/lib/python3.6/site-packages/vdsm/

Ales, could you take a look?

Regards, Marcin
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4UP3E2GSZBRXK3Y6OFTEGCBHZQ2TXP3C/


[ovirt-devel] Re: execution failed: javax.net.ssl.SSLPeerUnverifiedException (was: vdsm.storage.exception.UnknownTask: Task id unknown

2020-07-09 Thread Marcin Sobczyk
2020-07-02 18:01:55,914+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] START,
  UploadStreamVDSCommand(HostName =
didi-centos8-host.lab.eng.tlv2.redhat.com,
UploadStreamVDSCommandParameters:{hostId='a4fc6701-e2c7-4770-896a-d0ee74f9c7b8'}),
log id: 674791e5
2020-07-02 18:01:55,914+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] -- exe
cuteVdsBrokerCommand, parameters:
2020-07-02 18:01:55,914+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] ++ spU
UID=b9dccefe-bc61-11ea-8ebe-001a4a231728
2020-07-02 18:01:55,914+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] ++ sdU
UID=e102d7b5-1a37-490f-a3e7-20e56c37791f
2020-07-02 18:01:55,914+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] ++ ima
geGUID=db934a98-4111-4faf-8cb9-6b36928cd61c
2020-07-02 18:01:55,914+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] ++ vol
UUID=f898c40e-1f88-48db-b59b-f2c73162ddb7
2020-07-02 18:01:55,914+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] ++ siz
e=23552
2020-07-02 18:01:56,419+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.UploadStreamVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] FINISH
, UploadStreamVDSCommand, return: , log id: 674791e5
2020-07-02 18:01:58,732+03 INFO
[org.ovirt.engine.core.bll.tasks.CommandAsyncTask]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] CommandAsyncTask::Adding CommandMultiAsyncTasks object for
command 'ed1ff9b8-8cfd-489b-9cad-f078029a3cc1'
2020-07-02 18:01:58,732+03 INFO
[org.ovirt.engine.core.bll.CommandMultiAsyncTasks]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] CommandMultiAsyncTasks::attachTask: Attaching task
'997accaf-aa33-4632-a0bf-24d59a637255' to command
'ed1ff9b8-8cfd-489b-9cad-f078029a3cc1'.
2020-07-02 18:01:58,937+03 INFO
[org.ovirt.engine.core.bll.tasks.AsyncTaskManager]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] Adding task '997accaf-aa33-4632-a0bf-24d59a637255' (Parent
Command 'UploadStream', Parameters Type
'org.ovirt.engine.core.common.asynctasks.AsyncTaskParameters'),
polling hasn't started yet..
2020-07-02 18:01:58,963+03 INFO
[org.ovirt.engine.core.bll.tasks.SPMAsyncTask]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] BaseAsyncTask::startPollingTask: Starting to poll task
'997accaf-aa33-4632-a0bf-24d59a637255'.
2020-07-02 18:01:58,973+03 INFO
[org.ovirt.engine.core.bll.storage.ovfstore.UploadStreamCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] Lock freed to object 'EngineLock:{exclusiveLocks='',
sharedLocks='[a4fc6701-e2c7-4770-896a-d0ee74f9c7b8=VDS_EXECUTION]'}'
2020-07-02 18:01:58,979+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] START, SetVolumeDescriptionVDSCommand(
SetVolumeDescriptionVDSCommandParameters:{storagePoolId='b9dccefe-bc61-11ea-8ebe-001a4a231728',
ignoreFailoverLimit='false',
storageDomainId='e102d7b5-1a37-490f-a3e7-20e56c37791f',
imageGroupId='db934a98-4111-4faf-8cb9-6b36928cd61c',
imageId='f898c40e-1f88-48db-b59b-f2c73162ddb7'}), log id: 5cea0ad3
2020-07-02 18:01:58,979+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] -- executeIrsBrokerCommand: calling 'setVolumeDescription',
parameters:
2020-07-02 18:01:58,979+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] ++ spUUID=b9dccefe-bc61-11ea-8ebe-001a4a231728
2020-07-02 18:01:58,980+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] ++ sdUUID=e102d7b5-1a37-490f-a3e7-20e56c37791f
2020-07-02 18:01:58,980+03 INFO
[org.ovirt.engine.core.vdsbroker.irsbroker.SetVolumeDescriptionVDSCommand]
(EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-84)
[2b0721d8] ++ imageGroupGUID=db934a98-4111-4faf-8cb9-6b36928cd61c
2020-07-02 1
