[ovirt-devel] Re: OST check-patch jobs are stuck on archive artifacts stage

2020-11-08 Thread Amit Bawer
Thanks. Maybe the jobs are starved? Is it a last-come, first-served basis in
the CI queue?

On Sun, Nov 8, 2020 at 1:57 PM Ehud Yonasi  wrote:

> Hi,
>
> The 2 remaining jobs are waiting for an el8 host to become available, and
> currently there are only 3 of those, so it's creating a bottleneck.
>
> Regards,
> Ehud.
>
> On Sun, Nov 8, 2020 at 12:46 PM Amit Bawer  wrote:
>
>> Hi,
>>
>> Exemplified here: the job started last night and still hasn't finished archiving:
>>
>> https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/13195/console
>>
>> Thanks.
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YCUQMYDGVRKSQRT43UTCB2FDPV7JNQN5/


[ovirt-devel] OST check-patch jobs are stuck on archive artifacts stage

2020-11-08 Thread Amit Bawer
Hi,

Exemplified here: the job started last night and still hasn't finished archiving:
https://jenkins.ovirt.org/job/ovirt-system-tests_standard-check-patch/13195/console

Thanks.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WDOYYLFSE3NCVPOAFV2HNXXJ4M255B7H/


[ovirt-devel] Re: el8 CI breaking on ioprocess mirrors

2020-07-20 Thread Amit Bawer
On Sun, Jul 19, 2020 at 11:56 PM Nir Soffer  wrote:

> On Sun, Jul 19, 2020 at 2:20 PM Amit Bawer  wrote:
> >
> > Hi
> > This can be seen on [1]
> >
> > 13:47:57  Error: Error downloading packages:
> > 13:47:57Cannot download
> x86_64/ioprocess-1.4.1-1.202007151811.gitc41863d.el8.x86_64.rpm: All
> mirrors were tried
>
> I think this was caused by the ovirt-master-snapshot breakage today. Do
> you still see these failures?
>

CI is back to normal now, thanks.

>
> >
> > [1]
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/22702/consoleFull
> >
> > Thanks
>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TXT6S6BXJ2AZHX2KEKVGWQEMJREVFW5R/


[ovirt-devel] el8 CI breaking on ioprocess mirrors

2020-07-19 Thread Amit Bawer
Hi
This can be seen on [1]

13:47:57  Error: Error downloading packages:
13:47:57Cannot download
x86_64/ioprocess-1.4.1-1.202007151811.gitc41863d.el8.x86_64.rpm: All
mirrors were tried

[1]
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/22702/consoleFull

Thanks
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5R4GEQBP3SCBHDEOMSMLN77MSOBABLZP/


[ovirt-devel] Re: oVirt vdsm Bulk-API deep dive

2020-07-09 Thread Amit Bawer
On Thu, Jul 9, 2020 at 7:07 PM Amit Bawer  wrote:

> Thanks to all who joined,
>
> Recording link
>
> https://redhat.bluejeans.com/playback/guid/MjEyNzgxODM2OjQzMzI0Ny1kODllYTU5OS1lMWE1LTQwNDUtYTI0NC00NjdmZGVjZjM1NGE=?s=vl
>
publink: https://bluejeans.com/s/SaMK_/

>
> Slides are attached.
>
> On Wed, Jul 8, 2020 at 11:08 PM Nir Soffer  wrote:
>
>> I want to share an interesting talk on the new StorageDomain dump API.
>>
>> When:
>> Thursday, July 9⋅16:00 – 16:45 (Israel time)
>>
>> Where:
>> https://redhat.bluejeans.com/212781836
>>
>> Who:
>> Amit Bawer
>>
>> StorageDomain dump API is a faster way to query oVirt storage domain
>> contents, optimized
>> for bulk operations such as collecting sos reports or importing
>> storage domains during disaster
>> recovery.
>>
>> In recent tests we found that it is 295 times faster compared with
>> vdsm-tool dump-volume-chains:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1557147#c26
>>
>> What can you do with this API?
>>
>> Query volumes metadata by disk id:
>>
>> # vdsm-client StorageDomain dump
>> sd_id=56ecc03c-4bb5-4792-8971-3c51ea924d2e | jq '.volumes | .[] |
>> select(.image=="d7ead22a-0fbf-475c-a62f-6f7bc8473acb")'
>> {
>>   "apparentsize": 6442450944,
>>   "capacity": 6442450944,
>>   "ctime": 1593128884,
>>   "description":
>> "{\"DiskAlias\":\"fedora-32.raw\",\"DiskDescription\":\"Uploaded
>> disk\"}",
>>   "disktype": "DATA",
>>   "format": "RAW",
>>   "generation": 0,
>>   "image": "d7ead22a-0fbf-475c-a62f-6f7bc8473acb",
>>   "legality": "LEGAL",
>>   "parent": "----",
>>   "status": "OK",
>>   "truesize": 6442455040,
>>   "type": "PREALLOCATED",
>>   "voltype": "LEAF"
>> }
>>
>> Find all templates:
>>
>> # vdsm-client StorageDomain dump
>> sd_id=56ecc03c-4bb5-4792-8971-3c51ea924d2e | jq '.volumes | .[] |
>> select(.voltype=="SHARED")'
>> {
>>   "apparentsize": 6442450944,
>>   "capacity": 6442450944,
>>   "ctime": 1593116563,
>>   "description":
>> "{\"DiskAlias\":\"fedora-32.raw\",\"DiskDescription\":\"Uploaded
>> disk\"}",
>>   "disktype": "DATA",
>>   "format": "RAW",
>>   "generation": 0,
>>   "image": "9b62b5fa-920e-4d0c-baf6-40406106e48e",
>>   "legality": "LEGAL",
>>   "parent": "----",
>>   "status": "OK",
>>   "truesize": 6442516480,
>>   "type": "PREALLOCATED",
>>   "voltype": "SHARED"
>> }
>>
>> Please join the talk if you want to learn more.
>>
>> Cheers,
>> Nir
>>
>>
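The jq filters quoted above can also be reproduced client-side; a minimal Python sketch, assuming only the JSON shape shown in the quoted examples (a "volumes" mapping of volume UUID to metadata). The volume UUIDs below are made-up stand-ins, and the `select_volumes` helper is hypothetical, not part of vdsm:

```python
import json

# Hypothetical input shaped like the quoted `vdsm-client StorageDomain dump`
# output: a "volumes" mapping of volume UUID -> metadata (fields trimmed).
dump = json.loads("""
{
  "volumes": {
    "aaaaaaaa-0000-0000-0000-000000000001": {
      "image": "d7ead22a-0fbf-475c-a62f-6f7bc8473acb",
      "voltype": "LEAF",
      "format": "RAW"
    },
    "aaaaaaaa-0000-0000-0000-000000000002": {
      "image": "9b62b5fa-920e-4d0c-baf6-40406106e48e",
      "voltype": "SHARED",
      "format": "RAW"
    }
  }
}
""")

def select_volumes(dump, **criteria):
    """Return {vol_id: metadata} for volumes matching all given fields,
    mirroring the jq 'select(...)' filters shown above."""
    return {
        vol_id: md
        for vol_id, md in dump["volumes"].items()
        if all(md.get(k) == v for k, v in criteria.items())
    }

# Find all templates, like the 'select(.voltype=="SHARED")' jq query.
templates = select_volumes(dump, voltype="SHARED")
```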
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7SVYQ4CJBT673JHPPZT7LZ2NY2YP5R4G/


[ovirt-devel] Re: Bug 1522926 - [RFE] Integrate lvm filter configuration in vdsm-tool configure step

2020-06-23 Thread Amit Bawer
I am not sure how "vdsm-tool config-lvm-filter -y" should be carried out as
part of host deployment, as it is not part of the configurators executed by
"vdsm-tool configure".
Should it be another task in Ansible, or an addition to the host-deploy
package [1] after the configuration is done?

[1]
https://github.com/oVirt/ovirt-host-deploy/blob/master/src/plugins/ovirt-host-deploy/vdsm/packages.py#L138
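Either way, the calling side could be sketched roughly like this: a hypothetical wrapper, not the actual host-deploy code; it assumes the tool signals failure through its exit status (the exit-code fix Nir proposes later in this thread). Only the `vdsm-tool config-lvm-filter -y` command name is taken from the thread:

```python
import subprocess

def run_configurator(argv):
    """Run a configuration step and fail the deployment on a non-zero
    exit code. Hypothetical sketch; assumes the tool reports errors via
    its exit status, which is what the thread proposes fixing in
    vdsm-tool config-lvm-filter."""
    result = subprocess.run(argv, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(
            "%s failed (rc=%d): %s"
            % (argv[0], result.returncode, result.stderr.strip()))
    return result.stdout

# Intended use during host deployment (not run here):
# run_configurator(["vdsm-tool", "config-lvm-filter", "-y"])
```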

On Tue, Jun 23, 2020 at 4:30 PM Nir Soffer  wrote:

> On Tue, Jun 23, 2020 at 4:28 PM Yedidyah Bar David 
> wrote:
> >
> > On Tue, Jun 23, 2020 at 4:11 PM Amit Bawer  wrote:
> > >
> > >
> > >
> > > On Tue, Jun 23, 2020 at 2:55 PM Nir Soffer  wrote:
> > >>
> > >> On Tue, Jun 23, 2020 at 2:47 PM Tal Nisan  wrote:
> > >> >
> > >> > BTW: Nir wrote somewhere that it can be done in a day so it
> shouldn't be a problem ;)
> > >> >
> > >> > On Tue, Jun 23, 2020 at 2:02 PM Tal Nisan 
> wrote:
> > >> >>
> > >> >> Hey guys,
> > >> >> I've talked to Michal and we have to get this change into 4.4.1, so
> we'll need to start working on it ASAP. I've asked Amit to take it, so let's
> try to understand together what we need to do here
> > >>
> > >> I think we should:
> > >>
> > >> - fix the exit code of the tool, currently it always exits with 0, so
> > >> there is no way to handle errors.
> > >>   I commented about it in the bug
> > >>
> > >> - run the tool when deploying a host, after or before we run
> > >> "vdsm-tool configure --force"
> > >>   I don't know where the code runs when deploying a host; it may
> > >> be in some Ansible
> > >>   script. Best to ask on devel and CC Didi.
> > >
> > >
> > > Probably invoked from playbook for ovirt-host-deploy.yml, calling the
> following package code:
> > >
> https://github.com/oVirt/ovirt-host-deploy/blob/master/src/plugins/ovirt-host-deploy/vdsm/packages.py#L138
> > >
> > > +Yedidyah Bar David  could you confirm?
> >
> > Yes, AFAIK - but this code is maintained by Infra team, not
> > Integration. Adding Dana, who is the main/original author (and
> > probably main maintainer?).
>
> Great.
>
> Amit, please continue the technical discussion on devel.
>
> > > From ovirt-host-deploy log in engine:
> > >
> > > 2020-05-25 16:49:32 EDT - TASK [ovirt-host-deploy-vdsm : Verify
> minimum vdsm version exists] *
> > > 2020-05-25 16:49:32 EDT - TASK [ovirt-host-deploy-vdsm : Reconfigure
> vdsm tool] **
> > > 2020-05-25 16:50:08 EDT - changed: [10.35.18.187]
> > > 2020-05-25 16:50:08 EDT - {
> > >   "status" : "OK",
> > >   "msg" : "",
> > >   "data" : {
> > > "uuid" : "13daab0a-a4d8-4a67-8266-3a350efeb36a",
> > > "counter" : 52,
> > > "stdout" : "changed: [10.35.18.187]",
> > > "start_line" : 46,
> > > "end_line" : 47,
> > > "runner_ident" : "2ef6527a-9ec9-11ea-82a6-525400200635",
> > > "event" : "runner_on_ok",
> > > "pid" : 30013,
> > > "created" : "2020-05-25T20:50:06.341271",
> > > "parent_uuid" : "52540020-0635-d1d2-2820-0199",
> > > "event_data" : {
> > >   "playbook" : "ovirt-host-deploy.yml",
> > >   "playbook_uuid" : "b9cdaec8-402d-4705-81d0-973775a69e18",
> > >   "play" : "all",
> > >   "play_uuid" : "52540020-0635-d1d2-2820-0006",
> > >   "play_pattern" : "all",
> > >   "task" : "Reconfigure vdsm tool",
> > >   "task_uuid" : "52540020-0635-d1d2-2820-0199",
> > >   "task_action" : "command",
> > >   "task_args" : "",
> > >   "task_path" :
> "/usr/share/ovirt-engine/ansible-runner-service-project/project/roles/ovirt-host-deploy-vdsm/tasks/packages.yml:18",
> > >   "role" : "ovirt-host-deploy-vdsm",
> > >   "host" : "10.35.18.187",
> > >   "remote_addr" : "10.35.18.

[ovirt-devel] Re: Proposing Ales Musil as VDSM network maintainer

2020-06-23 Thread Amit Bawer
+1

On Tue, Jun 23, 2020 at 2:53 PM Petr Horacek  wrote:

>
>
> út 23. 6. 2020 v 13:32 odesílatel Nir Soffer  napsal:
>
>> On Mon, Jun 22, 2020 at 1:13 PM Marcin Sobczyk 
>> wrote:
>> > On 6/22/20 11:00 AM, Edward Haas wrote:
>> >
>> > Hello to all VDSM maintainers,
>>
>> These discussions should be public.
>> Adding devel@ovirt.org
>>
>> > I hope I did not miss any active maintainers.
>> >
>> > I'd like to nominate Ales Musil to formally take over the vdsm-network
>> vertical as a maintainer
>> > after acting as such in the last period.
>> >
>> > Would you approve adding him to the vdsm-master-maintainers
>> > https://gerrit.ovirt.org/#/admin/groups/106,members list,
>> > so he will not be blocked on my availability?
>> >
>> > +1
>>
>> I don't see any problem, so +1.
>>
>
> +1, well deserved.
>
>
>>
>> Let's wait a few days for more feedback on devel.
>>
>> Nir
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/UH3YAS67MICAL34RHH6QZVC5Y3PRZ3OF/
>>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/V62Z27EJJQJB5PXMZSKQVQZ6RXRJNBCE/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/UQ7GJAFKBH2RNW2IYOGJAR7XTXD3JT3L/


[ovirt-devel] OST for 4.3 is broken on build stage

2020-06-22 Thread Amit Bawer
Hi

The lookup of the el7 repo fails:


[2020-06-22T07:46:48.800Z] ERROR: Command failed:
[2020-06-22T07:46:48.800Z]  # /usr/bin/yum --installroot
/var/lib/mock/epel-7-ppc64le-60da95f193baa5a745076a0c6b674c73-bootstrap-6968/root/
--releasever 7 install yum yum-utils --setopt=tsflags=nocontexts
[2020-06-22T07:46:48.800Z] Failed to set locale, defaulting to C
[2020-06-22T07:46:48.800Z]
http://cbs.centos.org/repos/virt7-ovirt-common-candidate/ppc64le/os/repodata/repomd.xml:
[Errno 14] HTTPS Error 404 - Not Found
[2020-06-22T07:46:48.800Z] Trying other mirror.
[2020-06-22T07:46:48.800Z] To address this issue please refer to the below
wiki article
[2020-06-22T07:46:48.800Z]


Taken from
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/21966/consoleText

Thanks
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/4ISED6KAQ6SM64EVTDXCLRGRTFL7Y6KO/


[ovirt-devel] Vdsm CI for el8 is broken

2020-06-21 Thread Amit Bawer
Hi

It seems that in the last few days the el8 CI for vdsm cannot run; were there
any el8 repo changes?

[2020-06-21T09:20:03.003Z] Mock Version: 2.2
[2020-06-21T09:20:03.003Z] INFO: Mock Version: 2.2
[2020-06-21T09:20:03.987Z] Start: dnf install
[2020-06-21T09:20:12.326Z] ERROR: Command failed:
[2020-06-21T09:20:12.326Z]  # /usr/bin/dnf --installroot
/var/lib/mock/epel-8-x86_64-8e9eeb575ab4da7bf0fbfdc80a25b9c0-30232/root/
--releasever 8 --setopt=deltarpm=False --allowerasing --disableplugin=local
--disableplugin=spacewalk --disableplugin=local --disableplugin=spacewalk
install dnf tar gcc-c++ redhat-rpm-config which xz sed make bzip2 gzip gcc
coreutils unzip shadow-utils diffutils cpio bash gawk rpm-build info patch
util-linux findutils grep python36 autoconf automake createrepo dnf
dnf-utils e2fsprogs gcc gdb git iproute-tc iscsi-initiator-utils
libguestfs-tools-c lshw make openvswitch ovirt-imageio-common
python3-augeas python3-blivet python3-coverage python3-dateutil
python3-dbus python3-decorator python3-devel python3-dmidecode
python3-inotify python3-ioprocess-1.4.1 python3-libselinux python3-libvirt
python3-magic python3-netaddr python3-nose python3-pip
python3-policycoreutils python3-pyyaml python3-requests python3-sanlock
python3-six python3-yaml rpm-build rpmlint sanlock sudo systemd-udev
xfsprogs --setopt=tsflags=nocontexts
[2020-06-21T09:20:12.326Z] No matches found for the following disable
plugin patterns: local, spacewalk
[2020-06-21T09:20:12.326Z] Last metadata expiration check: 0:00:02 ago on
Sun Jun 21 09:20:07 2020.
[2020-06-21T09:20:12.326Z] No match for argument: openvswitch
[2020-06-21T09:20:12.326Z] Error: Unable to find a match: openvswitch

Taken from
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/21966/consoleText

Thanks
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/OAA7WZAUVNXMELIXPCZR45R6ECDC6PRE/


[ovirt-devel] Re: [ovirt-users] POWER9 Support: VDSM requiring LVM2 package that's missing

2020-05-14 Thread Amit Bawer
+devel 

On Fri, May 15, 2020 at 2:29 AM Amit Bawer  wrote:

> Yes, it correlates to the bug mentioned;
> currently the rpm for vdsm-4.30.46 is available at the 4.3 pre-release repo
> https://resources.ovirt.org/pub/ovirt-4.3-pre/rpm/el7/ppc64le/
> The repodata is one level up.
>
> On Fri, May 15, 2020 at 2:26 AM Vinícius Ferrão 
> wrote:
>
>> Hi Amit, I think I found the answer: It’s not available yet.
>>
>> https://bugzilla.redhat.com/show_bug.cgi?id=1829348
>>
>> It's this bug right?
>>
>> Thanks,
>>
>> On 14 May 2020, at 20:14, Vinícius Ferrão 
>> wrote:
>>
>> Hi Amit, thanks for confirming.
>>
>> Do you know in which repository VDSM 4.30.46 is available?
>>
>> It’s not available on any of both:
>> rhel-7-for-power-9-rpms/ppc64le
>>  Red Hat Enterprise Linux 7 for POWER9 (RPMs)
>>   9,156
>> rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms/ppc64le Red Hat
>> Virtualization 4 Management Agents (for RHEL 7 Server for IBM POWER9   814
>>
>>
>> Thank you!
>>
>>
>> On 14 May 2020, at 20:09, Amit Bawer  wrote:
>>
>>
>> On Fri, May 15, 2020 at 12:19 AM Vinícius Ferrão via Users <
>> us...@ovirt.org> wrote:
>>
>>> Hello,
>>>
>>> I would like to know if this is a bug or not, if yes I will submit to
>>> Red Hat.
>>>
>> Fixed on vdsm-4.30.46
>>
>>>
>>> I’m trying to add a ppc64le (POWER9) machine to the hosts pool, but
>>> there’s missing dependencies on VDSM:
>>>
>>> --> Processing Dependency: lvm2 >= 7:2.02.186-7.el7_8.1 for package:
>>> vdsm-4.30.44-1.el7ev.ppc64le
>>> --> Finished Dependency Resolution
>>> Error: Package: vdsm-4.30.44-1.el7ev.ppc64le
>>> (rhel-7-server-rhv-4-mgmt-agent-for-power-9-rpms)
>>>Requires: lvm2 >= 7:2.02.186-7.el7_8.1
>>>Available: 7:lvm2-2.02.171-8.el7.ppc64le
>>> (rhel-7-for-power-9-rpms)
>>>lvm2 = 7:2.02.171-8.el7
>>>Available: 7:lvm2-2.02.177-4.el7.ppc64le
>>> (rhel-7-for-power-9-rpms)
>>>lvm2 = 7:2.02.177-4.el7
>>>Available: 7:lvm2-2.02.180-8.el7.ppc64le
>>> (rhel-7-for-power-9-rpms)
>>>lvm2 = 7:2.02.180-8.el7
>>>Available: 7:lvm2-2.02.180-10.el7_6.1.ppc64le
>>> (rhel-7-for-power-9-rpms)
>>>lvm2 = 7:2.02.180-10.el7_6.1
>>>Available: 7:lvm2-2.02.180-10.el7_6.2.ppc64le
>>> (rhel-7-for-power-9-rpms)
>>>lvm2 = 7:2.02.180-10.el7_6.2
>>>Available: 7:lvm2-2.02.180-10.el7_6.3.ppc64le
>>> (rhel-7-for-power-9-rpms)
>>>lvm2 = 7:2.02.180-10.el7_6.3
>>>Available: 7:lvm2-2.02.180-10.el7_6.7.ppc64le
>>> (rhel-7-for-power-9-rpms)
>>>lvm2 = 7:2.02.180-10.el7_6.7
>>>Available: 7:lvm2-2.02.180-10.el7_6.8.ppc64le
>>> (rhel-7-for-power-9-rpms)
>>>lvm2 = 7:2.02.180-10.el7_6.8
>>>Installing: 7:lvm2-2.02.180-10.el7_6.9.ppc64le
>>> (rhel-7-for-power-9-rpms)
>>>lvm2 = 7:2.02.180-10.el7_6.9
>>>
>>>
>>> Thanks,
>>>
>>> ___
>>> Users mailing list -- us...@ovirt.org
>>> To unsubscribe send an email to users-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/privacy-policy.html
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/us...@ovirt.org/message/I3YDM2VN7K2GHNLNLWCEXZRSAHI4F4L7/
>>>
>>
>>
>>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RSE3OFCUWDCMKQPE4QDRE46JB6WNRAS3/


[ovirt-devel] Vdsm CI is missing glusterfs repo

2020-05-04 Thread Amit Bawer
Example from
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/20818/consoleText

[2020-05-04T06:40:08.041Z] Failed to download metadata for repo 'glusterfs'
[2020-05-04T06:40:08.041Z] Error: Failed to download metadata for repo
'glusterfs'

Suggesting: https://gerrit.ovirt.org/#/c/108782/

Thanks.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HZOMQBETRF3KRYAPLSEKIDGX6BVS6VPT/


[ovirt-devel] OST fails for unreachable repo 'alocalsync'

2020-03-30 Thread Amit Bawer
Hi

OST fails for unreachable repo 'alocalsync',

Seen at
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6706/testReport/basic-suite-master.test-scenarios/002_bootstrap_pytest/test_configure_storage/


Thanks
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/X3QWBZLCKX47BCOLQ425VSJ2GO6I2V3Z/


[ovirt-devel] Re: OST basic suite fails on 002_bootstrap.add_secondary_storage_domains

2020-03-10 Thread Amit Bawer
Seems like a reproduction of
https://bugzilla.redhat.com/show_bug.cgi?id=1807050#c1

Snipped from
https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/21146/artifact/basic-suite.el7.x86_64/test_logs/basic-suite-master/post-002_bootstrap.py/lago-basic-suite-master-host-1/_var_log/vdsm/vdsm.log
:

2020-03-10 05:59:18,549-0400 ERROR (jsonrpc/3) [storage.LVM] vg
cceb9d83-7b76-4840-a189-c82f3c18760e has pv_count 2 but pv_names
('/dev/mapper/3600140544bef7e411164e5f94e13b5d8',) (lvm:578)
2020-03-10 05:59:18,551-0400 INFO  (jsonrpc/3) [storage.StorageDomain]
sdUUID=cceb9d83-7b76-4840-a189-c82f3c18760e (blockSD:1192)
2020-03-10 05:59:18,551-0400 DEBUG (jsonrpc/3) [common.commands]
/usr/bin/taskset --cpu-list 0-1 /usr/bin/sudo -n /sbin/lvm vgck --config
'devices {  preferred_names=["^/dev/mapper/"]  ignore_suspended_devices=1
 write_cache_state=0  disable_after_error_count=3
 filter=["a|^/dev/mapper/3600140544bef7e411164e5f94e13b5d8$|", "r|.*|"]
 hints="none" } global {  locking_type=1  prioritise_write_locks=1
 wait_for_locks=1  use_lvmetad=0 } backup {  retain_min=50  retain_days=0
}' cceb9d83-7b76-4840-a189-c82f3c18760e (cwd None) (commands:153)
2020-03-10 05:59:18,634-0400 DEBUG (jsonrpc/3) [common.commands] FAILED:
 = b"  WARNING: Couldn't find device with uuid
FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.\n  WARNING: VG
cceb9d83-7b76-4840-a189-c82f3c18760e is missing PV
FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.\n  The volume group is missing 1
physical volumes.\n";  = 5 (commands:185)
2020-03-10 05:59:18,637-0400 INFO  (jsonrpc/3) [vdsm.api] FINISH
getStorageDomainInfo error=Domain is either partially accessible or
entirely inaccessible: ('cceb9d83-7b76-4840-a189-c82f3c18760e: ["  WARNING:
Couldn\'t find device with uuid FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.",
\'  WARNING: VG cceb9d83-7b76-4840-a189-c82f3c18760e is missing PV
FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.\', \'  The volume group is missing
1 physical volumes.\']',) from=:::192.168.201.4,47796,
flow_id=5f02a1ec-db37-470d-b329-41b22f23582b,
task_id=9be86ca4-49ac-47ea-b0e2-8182e33924ff (api:52)
2020-03-10 05:59:18,637-0400 ERROR (jsonrpc/3) [storage.TaskManager.Task]
(Task='9be86ca4-49ac-47ea-b0e2-8182e33924ff') Unexpected error (task:880)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/task.py", line 887,
in _run
return fn(*args, **kargs)
  File "", line 2, in getStorageDomainInfo
  File "/usr/lib/python3.6/site-packages/vdsm/common/api.py", line 50, in
method
ret = func(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 2752,
in getStorageDomainInfo
dom = self.validateSdUUID(sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/hsm.py", line 310, in
validateSdUUID
sdDom.validate()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/blockSD.py", line
1193, in validate
lvm.chkVG(self.sdUUID)
  File "/usr/lib/python3.6/site-packages/vdsm/storage/lvm.py", line 1278,
in chkVG
raise se.StorageDomainAccessError("%s: %s" % (vgName, err))
vdsm.storage.exception.StorageDomainAccessError: Domain is either partially
accessible or entirely inaccessible:
('cceb9d83-7b76-4840-a189-c82f3c18760e: ["  WARNING: Couldn\'t find device
with uuid FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.", \'  WARNING: VG
cceb9d83-7b76-4840-a189-c82f3c18760e is missing PV
FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.\', \'  The volume group is missing
1 physical volumes.\']',)
2020-03-10 05:59:18,637-0400 INFO  (jsonrpc/3) [storage.TaskManager.Task]
(Task='9be86ca4-49ac-47ea-b0e2-8182e33924ff') aborting: Task is aborted:
'value=Domain is either partially accessible or entirely inaccessible:
(\'cceb9d83-7b76-4840-a189-c82f3c18760e: ["  WARNING: Couldn\\\'t find
device with uuid FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.", \\\'  WARNING:
VG cceb9d83-7b76-4840-a189-c82f3c18760e is missing PV
FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.\\\', \\\'  The volume group is
missing 1 physical volumes.\\\']\',) abortedcode=379' (task:1190)
2020-03-10 05:59:18,638-0400 ERROR (jsonrpc/3) [storage.Dispatcher] FINISH
getStorageDomainInfo error=Domain is either partially accessible or
entirely inaccessible: ('cceb9d83-7b76-4840-a189-c82f3c18760e: ["  WARNING:
Couldn\'t find device with uuid FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.",
\'  WARNING: VG cceb9d83-7b76-4840-a189-c82f3c18760e is missing PV
FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.\', \'  The volume group is missing
1 physical volumes.\']',) (dispatcher:83)


I suggest trying again once the BZ is fixed on master.
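As an aside, the lvm warnings vdsm reacts to here can be picked apart mechanically. A small sketch with the patterns copied from the log above; the `missing_pvs` helper is hypothetical, while vdsm's real handling lives in `lvm.chkVG`, which raises `StorageDomainAccessError` on any vgck error as the traceback shows:

```python
import re

# Matches lvm warnings like:
#   WARNING: VG <vg-uuid> is missing PV <pv-uuid>.
MISSING_PV = re.compile(r"VG (?P<vg>[\w-]+) is missing PV (?P<pv>[\w-]+)")

def missing_pvs(lvm_stderr):
    """Extract (vg, pv) pairs from lvm stderr warnings like the ones in
    the quoted vdsm.log excerpt."""
    return [(m.group("vg"), m.group("pv"))
            for m in MISSING_PV.finditer(lvm_stderr)]

# Stderr text copied from the vgck failure in the log above.
err = ("  WARNING: Couldn't find device with uuid "
       "FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.\n"
       "  WARNING: VG cceb9d83-7b76-4840-a189-c82f3c18760e is missing PV "
       "FH6lfD-DZus-6Ndn-tkr8-5Hsy-lt2c-CDRPDU.\n"
       "  The volume group is missing 1 physical volumes.\n")
pairs = missing_pvs(err)
```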

On Tue, Mar 10, 2020 at 1:36 PM Yedidyah Bar David  wrote:
>
> Hi all,
>
> Anyone looking at this?
>
> See e.g.:
>
> https://jenkins.ovirt.org/job/ovirt-master_change-queue-tester/21146/
>
> Thanks,
> --
> Didi
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
https:

[ovirt-devel] Re: Proposing Benny Zlotnik as an Engine Storage maintainer

2020-02-22 Thread Amit Bawer
+1

On Friday, February 21, 2020, Tal Nisan  wrote:

> Hi everyone,
> Benny joined the Storage team in November 2016 and since then has played a
> key role in investigating very complex customer bugs around the oVirt engine,
> as well as contributing to features such as DR, Cinderlib, and the new
> export/import mechanism; he also rewrote and still maintains the LSM mechanism.
> Given his big contribution and knowledge around the engine I'd like to
> nominate Benny as an engine storage maintainer.
>
> Your thoughts please.
> Tal.
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/LZ7WRTUR3PBAOX3WWDLWJWCPUD5ZHHZ7/


[ovirt-devel] do we even handle volume metadata in the engine?

2020-02-18 Thread Amit Bawer
You could check Volume.getInfo output:


# vdsm-client Volume getInfo storagepoolID=None
storagedomainID=91630622-c645-4397-a9fe-9ddf26690500
imageID=9f36c5ff-2ed1-4d1a-a7ad-365e5e1fb7b6
volumeID=4093e21a-73f7-451a-90d1-2b8d41685164
{
"apparentsize": "3489660928",
"capacity": "6442450944",
"children": [],
"ctime": "1581807371",
"description":
"{\"DiskAlias\":\"fedora-30.qcow2\",\"DiskDescription\":\"Uploaded disk\"}",
"disktype": "DATA",
"domain": "91630622-c645-4397-a9fe-9ddf26690500",
"format": "COW",
"generation": 0,
"image": "9f36c5ff-2ed1-4d1a-a7ad-365e5e1fb7b6",
"lease": {
"offset": 105906176,
"owners": [],
"path": "/dev/91630622-c645-4397-a9fe-9ddf26690500/leases",
"version": null
},
"legality": "LEGAL",
"mtime": "0",
"parent": "----",
"pool": "",
"status": "OK",
"truesize": "3489660928",
"type": "SPARSE",
"uuid": "4093e21a-73f7-451a-90d1-2b8d41685164",
"voltype": "LEAF"
}
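For the validation asked about in the quoted question below (that VOLTYPE is LEAF), a minimal sketch over this getInfo dict; only the "voltype" field is taken from the output above, and the helper name is hypothetical:

```python
def validate_leaf(volume_info):
    """Raise if the volume is not a leaf, based on the "voltype" field
    of the Volume.getInfo output shown above. Hypothetical helper; an
    engine-side validation would check the same field."""
    voltype = volume_info.get("voltype")
    if voltype != "LEAF":
        raise ValueError("expected a LEAF volume, got voltype=%r" % voltype)

# Passes silently for the leaf volume shown above.
validate_leaf({"uuid": "4093e21a-73f7-451a-90d1-2b8d41685164",
               "voltype": "LEAF"})
```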
On Tuesday, February 18, 2020, Fedor Gavrilov  wrote:

> Hi,
>
> It seems I was able to get my setup working for NFS storage, thanks for
> your advice!
>
> Now I'm afraid I am stuck again: what I need to do is to add validation
> for volume metadata to a certain command (to be exact, that VOLTYPE is
> LEAF). But I can't find examples of us dealing with this data in the engine
> part at all. I understand this is in the end executed by VDSM, but
> nevertheless - in what format are we even reading and writing volume
> metadata? Even class/method name would be helpful.
>
> Thanks,
> Fedor
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EBNNYCDNFV353HN4HZK3XSIX5ABC/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZYJPI2WLIWKTKU7YUHM6VFCQXZPQWLIS/


[ovirt-devel] Re: CI: test_qcow2_to_raw_preallocated is flaky

2020-02-16 Thread Amit Bawer
On Sun, Feb 16, 2020 at 3:53 PM Nir Soffer  wrote:

> On Sun, Feb 16, 2020 at 11:52 AM Amit Bawer  wrote:
> >
> > Hi,
> >
> > This occurs on CI every now and then,
> > taken from:
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/18302//artifact/check-patch.tests-py3.el8.x86_64/mock_logs/script/stdout_stderr.log
> >
> > Thanks
> >
> >
> >
> >  TestConvertPreallocation.test_qcow2_to_raw_preallocated[full]
> _
> >
> > self =  0x7f9bac7c3ef0>
> > preallocation = 'full'
> >
> > @pytest.mark.parametrize("preallocation", [
> > qemuimg.PREALLOCATION.FALLOC,
> > qemuimg.PREALLOCATION.FULL,
> > ])
> > def test_qcow2_to_raw_preallocated(self, preallocation):
> > virtual_size = 10 * MiB
> > with namedTemporaryDir() as tmpdir:
> > src = os.path.join(tmpdir, 'src')
> > dst = os.path.join(tmpdir, 'dst')
> >
> > op = qemuimg.create(src, size=virtual_size, format="qcow2")
> > op.run()
> >
> > op = qemuimg.convert(src, dst, srcFormat="qcow2",
> dstFormat="raw",
> >  preallocation=preallocation)
> > op.run()
> > >   check_raw_preallocated_image(dst, virtual_size)
> >
> > storage/qemuimg_test.py:561:
> > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> _ _ _ _
> >
> > path = '/var/tmp/tmpxr0emprz/dst', virtual_size = 10485760
> >
> > def check_raw_preallocated_image(path, virtual_size):
> > image_stat = os.stat(path)
> > assert image_stat.st_size == virtual_size
> > >   assert image_stat.st_blocks * 512 == virtual_size
> > E   assert (20488 * 512) == 10485760
> > E+  where 20488 = os.stat_result(st_mode=33188, st_ino=411528,
> st_dev=2049, st_nlink=1, st_uid=0, st_gid=0, st_size=10485760,
> st_atime=1581845207, st_mtime=1581845207, st_ctime=1581845207).st_blocks
>
> Depending on the filesystem, the file system may report more blocks
> than expected.
>

In that case, shouldn't it happen on every test run? This only happens
part of the time.


> We can change the assert to:
>
> assert image_stat.st_blocks * 512 >= virtual_size
>
> In qemu iotests this is solved in a more precise way:
>
> https://github.com/qemu/qemu/blob/b29c3e23f64938784c42ef9fca896829e3c19120/tests/qemu-iotests/175#L82
>
> https://github.com/qemu/qemu/blob/b29c3e23f64938784c42ef9fca896829e3c19120/tests/qemu-iotests/175#L87
>
> I think we can adapt these checks and use them in every test checking
> for allocation. We have several tests
> that can use this.
>
> > storage/qemuimg_test.py:621: AssertionError
> >
>
>
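
For reference, a tolerant check along those lines could look like this
(a minimal sketch loosely adapted from the qemu-iotests idea above, not
the actual vdsm test code; the 1 MiB slack bound is my assumption):

```python
import os

def check_allocation(path, virtual_size, slack=1 * 1024 * 1024):
    """Tolerant allocation check for a raw preallocated image:
    the file must be at least fully allocated, but the filesystem
    may add some metadata blocks on top. Sketch only; the slack
    bound is an assumption."""
    image_stat = os.stat(path)
    # st_blocks is always counted in 512-byte units, regardless of
    # the filesystem block size.
    allocated = image_stat.st_blocks * 512
    assert image_stat.st_size == virtual_size
    assert allocated >= virtual_size
    assert allocated <= virtual_size + slack
```

This would accept the 20488-block result seen in the failure above
(10491904 bytes allocated for a 10 MiB image) while still catching
images that are not actually preallocated.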
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SEKBUN2ZTBFRGC574FRBEXYHLQL2FW6V/


[ovirt-devel] CI: test_qcow2_to_raw_preallocated is flaky

2020-02-16 Thread Amit Bawer
Hi,

This occurs on CI every now and then,
taken from: 
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/18302//artifact/check-patch.tests-py3.el8.x86_64/mock_logs/script/stdout_stderr.log

Thanks



 TestConvertPreallocation.test_qcow2_to_raw_preallocated[full] _

self = 
preallocation = 'full'

@pytest.mark.parametrize("preallocation", [
qemuimg.PREALLOCATION.FALLOC,
qemuimg.PREALLOCATION.FULL,
])
def test_qcow2_to_raw_preallocated(self, preallocation):
virtual_size = 10 * MiB
with namedTemporaryDir() as tmpdir:
src = os.path.join(tmpdir, 'src')
dst = os.path.join(tmpdir, 'dst')

op = qemuimg.create(src, size=virtual_size, format="qcow2")
op.run()

op = qemuimg.convert(src, dst, srcFormat="qcow2", dstFormat="raw",
 preallocation=preallocation)
op.run()
>   check_raw_preallocated_image(dst, virtual_size)

storage/qemuimg_test.py:561:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

path = '/var/tmp/tmpxr0emprz/dst', virtual_size = 10485760

def check_raw_preallocated_image(path, virtual_size):
image_stat = os.stat(path)
assert image_stat.st_size == virtual_size
>   assert image_stat.st_blocks * 512 == virtual_size
E   assert (20488 * 512) == 10485760
E+  where 20488 = os.stat_result(st_mode=33188, st_ino=411528,
st_dev=2049, st_nlink=1, st_uid=0, st_gid=0, st_size=10485760,
st_atime=1581845207, st_mtime=1581845207,
st_ctime=1581845207).st_blocks

storage/qemuimg_test.py:621: AssertionError
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/G7PRFHNYMDKAMI7XZ2K7XGTHUBR4QWBZ/


[ovirt-devel] Re: glance.ovirt.org down?

2020-02-03 Thread Amit Bawer
On Mon, Feb 3, 2020 at 11:44 AM Miguel Duarte de Mora Barroso <
mdbarr...@redhat.com> wrote:

> Hi,
>
> I'm seeing the following error whenever I attempt to import an image via
> glance.
>
> 2020-02-03 09:24:02,275+ ERROR (tasks/0)
> [storage.TaskManager.Task] (Task='c6c
> 4140b-1ddf-4273-b170-a0dd1589832b') Unexpected error (task:875)
> Traceback (most recent call last):
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line
> 882, in _run
> return fn(*args, **kargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/task.py", line 336,
> in run
> return self.cmd(*self.argslist, **self.argsdict)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/securable.py",
> line 79, in w
> rapper
> return method(self, *args, **kwargs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/sp.py", line
> 1778, in downlo
> adImage
> return img.download(methodArgs, sdUUID, imgUUID, volUUID)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/image.py", line
> 1427, in dow
> nload
> vol.extend(imageSharing.getSize(methodArgs))
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/imageSharing.py",
> line 172,
> in getSize
> return getSizeImpl(methodArgs)
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/imageSharing.py",
> line 44, i
> n httpGetSize
> methodArgs.get("headers", {}))
>   File "/usr/lib/python2.7/site-packages/vdsm/storage/curlImgWrap.py",
> line 60, in
>  head
> raise CurlError(rc, out, err)
> CurlError: ecode=6, stdout=[], stderr=['curl: (6) Could not resolve
> host: glance.o
> virt.org; Unknown error'], message=None
>
> Not sure where I could report this.
>

Probably on the +infra list; it seems there is an issue with
resolving "glance.ovirt.org":

# nslookup  glance.ovirt.org
Server: 10.35.255.14
Address: 10.35.255.14#53

Non-authoritative answer:
Name: glance.ovirt.org
Address: 8.43.85.218
Name: glance.ovirt.org
Address: 2620:52:3:1:5054:ff:fe32:c1d9

# ping glance.ovirt.org
PING glance.ovirt.org(glance.ovirt.org (2620:52:3:1:5054:ff:fe32:c1d9)) 56
data bytes
...

# ping 8.43.85.218
PING 8.43.85.218 (8.43.85.218) 56(84) bytes of data.
64 bytes from 8.43.85.218: icmp_seq=1 ttl=51 time=146 ms
64 bytes from 8.43.85.218: icmp_seq=2 ttl=51 time=146 ms
64 bytes from 8.43.85.218: icmp_seq=3 ttl=51 time=146 ms
64 bytes from 8.43.85.218: icmp_seq=4 ttl=51 time=146 ms
64 bytes from 8.43.85.218: icmp_seq=5 ttl=51 time=146 ms

I could only access it by IP, i.e. http://8.43.85.218:9292/
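
For what it's worth, the IPv4/IPv6 split above can be probed per
address family; a small diagnostic sketch (hypothetical helper, not
part of any oVirt tool):

```python
import socket

def probe_host(host, port, timeout=5):
    """Try a TCP connection to each resolved address separately,
    so IPv4 and IPv6 reachability can be told apart."""
    results = {}
    for *_, sockaddr in socket.getaddrinfo(host, port, type=socket.SOCK_STREAM):
        addr = sockaddr[0]
        try:
            with socket.create_connection((addr, port), timeout=timeout):
                results[addr] = "ok"
        except OSError as exc:
            results[addr] = "failed: %s" % exc
    return results

# e.g. probe_host("glance.ovirt.org", 9292)
```

In a case like the above this would show the IPv6 address failing
while 8.43.85.218 connects fine.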


>
> Thanks in advance,
> Miguel
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/CFRWUZJSC2J2EWAQGLUUEDOU7DCICFNX/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/R5THFEEDYJL7E562YOQWQ7E3HZMZL4TA/


[ovirt-devel] Re: RFC: Not Failing OST on Missing Artifact During Collection

2020-01-29 Thread Amit Bawer
On Wednesday, January 29, 2020, Anton Marchukov  wrote:

> Hello All.
>
> I faced it multiple times during local runs, e.g. first it was missing
> /tmp/otopi*. Now it fails for me on missing 
> /var/lib/pgsql/upgrade_rh-postgresql95-postgresql.log
> and /etc/dnf on centos7 engine. It seems like any missing artifact during
> collection step will fail the test.
>
> I understand some potential benefit of it, but I suggest we do not fail it
> if we are unable to find any of the artifacts during collection step.
> Better just to issue a warning in the logs and go on. If somebody needs to
> test for particular log presence, I suggest it should be included as a test
> step instead.
>
> Wdyt?


+1
A step further: it would be very helpful if we could select the suites
for OST at launch time, since we are often only interested in a specific
component. For example: run only the storage suite and skip the UI
browser suite when checking a storage change. This way we can avoid OST
being blocked completely when a single broken suite is not relevant to
the change being verified.


> --
> Anton Marchukov
> Associate Manager - RHV DevOps - Red Hat
>
>
>
>
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/J3JZ337UJMBRBGUBPUIRA2T5NMNNPGE6/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/C2VWXDHCDU6EKJMRXDRVM4FDZELZHZ3Y/


[ovirt-devel] Re: Why is vdsm enabled by default?

2020-01-28 Thread Amit Bawer
On Tue, Jan 28, 2020 at 12:40 PM Yedidyah Bar David  wrote:

> On Tue, Jan 28, 2020 at 12:11 PM Amit Bawer  wrote:
>
>> From my limited experience, the usual flow for most users is
>> deploying/upgrading a host and installing vdsm from the engine UI on the
>> hypervisor machine.
>>
>
> You are right, for non-hosted-engine hosts. For hosted-engine, at least
> the first host, you first install stuff on it (including vdsm), then
> deploy, and only then have an engine. If for any reason you reboot in the
> middle, you might run into unneeded problems, due to vdsm starting at boot.
>
>
>> In case of manual installations by non-users, it is customary to run
>> "vdsm-tool configure --force" after step 3 and then reboot.
>>
>
> I didn't know that, sorry, but would not want to do that either, for
> hosted-engine. I'd rather hosted-engine deploy to do that, at the right
> point. Which it does :-)
>
>
>> Having a host on which vdsm is not running by default renders it useless
>> for ovirt, unless it is explicitly set to be down from UI under particular
>> circumstances.
>>
>
> Obviously, for an active host. If it's not active, and is rebooted, not
> sure we need vdsm to start - even if it's already added/configured/etc (but
> e.g. put in maintenance). But that's not my question - I don't mind
> enabling vdsmd as part of host-deploy, so that vdsm would start if a host
> in maintenance is rebooted. I only ask why it should be enabled by the rpm
> installation.
>

Hard to tell; this dates back to commit
d45e6827f38d36730ec468d31d905f21878c7250 and commit
c01a733ce81edc2c51ed3426f1424c93917bb106 before it, neither of which
specifies a reason.
But the rpm post-installation should also configure vdsm, at least on a
fresh install [1], so it makes sense (at least to me) that it is okay to
enable it by default, since everything is already set up for regular usage.

[1]
https://github.com/oVirt/vdsm/blob/b0c338b717ff300575c1ff690d9efa256fcd2164/vdsm.spec.in#L955


>
> Thanks!
>
>
>>
>> On Tue, Jan 28, 2020 at 11:47 AM Yedidyah Bar David 
>> wrote:
>>
>>> If I do e.g.:
>>>
>>> 1. Install CentOS
>>> 2. yum install ovirt-releaseSOMETHING
>>> 3. yum install vdsm
>>>
>>> Then reboot the machine, vdsm starts, and for this, it does all kinds of
>>> things to the system (such as configure various services using vdsm-tool
>>> etc.). Are we sure we want/need this? Why would we want vdsm
>>> configured/running at all at this stage, before being added to an engine?
>>>
>>> In particular, if (especially during development) we have a bug in this
>>> configuration process, and then fix it, it might not be enough to upgrade
>>> vdsm - the tooling will then also have to fix the changes done by the buggy
>>> previous version, or require a full machine reinstall.
>>>
>>> Thanks and best regards,
>>> --
>>> Didi
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3YHWLO3DFU2PLPGL44DBIBG25QYGOQL7/
>>>
>>
>
> --
> Didi
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/TLFRXMMPRJQPJQJJWFEOJ3CUOUSTOI23/


[ovirt-devel] Re: Why is vdsm enabled by default?

2020-01-28 Thread Amit Bawer
From my limited experience, the usual flow for most users is
deploying/upgrading a host and installing vdsm from the engine UI on the
hypervisor machine.
In case of manual installations by non-users, it is customary to run
"vdsm-tool configure --force" after step 3 and then reboot.
Having a host on which vdsm is not running by default renders it useless
for oVirt, unless it is explicitly set to be down from the UI under
particular circumstances.

On Tue, Jan 28, 2020 at 11:47 AM Yedidyah Bar David  wrote:

> If I do e.g.:
>
> 1. Install CentOS
> 2. yum install ovirt-releaseSOMETHING
> 3. yum install vdsm
>
> Then reboot the machine, vdsm starts, and for this, it does all kinds of
> things to the system (such as configure various services using vdsm-tool
> etc.). Are we sure we want/need this? Why would we want vdsm
> configured/running at all at this stage, before being added to an engine?
>
> In particular, if (especially during development) we have a bug in this
> configuration process, and then fix it, it might not be enough to upgrade
> vdsm - the tooling will then also have to fix the changes done by the buggy
> previous version, or require a full machine reinstall.
>
> Thanks and best regards,
> --
> Didi
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/O7AOVUUFVYY2IWKXIRNSR33PKNR74GNU/


[ovirt-devel] Re: Download Disk via SDK

2020-01-22 Thread Amit Bawer
Did you mean this?
https://github.com/oVirt/ovirt-engine-sdk/blob/master/sdk/examples/download_disk.py

On Wed, Jan 22, 2020 at 4:54 PM  wrote:

> The diskService in the SDK has access to Move, Export, Remove, etc.
> However, I dont see a way to send a download request on the Disk. Does this
> functionality exist?
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/EMWCFWN4FBD2N6LY7FX4KLVMGSPFC4NX/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VE62ZES53SEW7UVHO3NFVTPAUAA635WM/


[ovirt-devel] Re: CI: Multiple jobs hang on "Archiving Artifacts"

2020-01-19 Thread Amit Bawer
On Sun, Jan 19, 2020 at 2:02 PM Barak Korren  wrote:

> I see a lot of jobs there, so Jenkins might be a little loaded because all
> artifacts must be uploaded back to it.
>

Thanks for checking.


>
> I'm seeing some actual test failures there, for example:
>
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/17343/artifact/check-patch.tests-py3.fc30.x86_64/mock_logs/script/stdout_stderr.log
>

This one specifically seems related to an in-patch change that is not
merged yet.


>
> On Sun, 19 Jan 2020 at 13:55, Amit Bawer  wrote:
>
>> For the last 40 minutes, multiple CI jobs are hanging on "Archiving
>> Artifacts" stage:
>>
>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/
>>
>> could be a false alarm, but none seem to progress.
>>
>
>
> --
> Barak Korren
> RHV DevOps team , RHCE, RHCi
> Red Hat EMEA
> redhat.com | TRIED. TESTED. TRUSTED. | redhat.com/trusted
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/MDALQSGYYSGZIVPXRGKTUHIRFTDCVXNX/


[ovirt-devel] CI: Multiple jobs hang on "Archiving Artifacts"

2020-01-19 Thread Amit Bawer
For the last 40 minutes, multiple CI jobs are hanging on "Archiving
Artifacts" stage:

https://jenkins.ovirt.org/job/vdsm_standard-check-patch/

could be a false alarm, but none seem to progress.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FRYTRCG4D2HQM6IBUWWCYBQJQAGOE4OK/


[ovirt-devel] Re: OST is failing - Last successful run was Dec-13-2019

2020-01-06 Thread Amit Bawer

On Mon, Jan 6, 2020 at 11:48 AM Martin Perina  wrote:

>
>
> On Sun, Jan 5, 2020 at 10:08 AM Amit Bawer  wrote:
>
>> Seems we have NFS permissions issue for el8 vdsm in some of the runs.
>>
>> Example from
>> https://jenkins.ovirt.org/view/Amit/job/ovirt-system-tests_manual/6302/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/vdsm/vdsm.log
>> :
>>
>>
>> 2020-01-03 12:07:34,169-0500 INFO  (MainThread) [vds] (PID: 1264) I am
>> the actual vdsm 4.40.0.1458.git1fca84350 lago-basic-suite-master-host-1
>> (4.18.0-80.11.2.el8_0.x86_64) (vdsmd:152)...
>> 2020-01-03 12:50:29,662-0500 ERROR (check/loop) [storage.Monitor] Error
>> checking path 
>> /rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported/b92b26cf-fac4-4ccf-ba31-f6fb4184e302/dom_md/metadata
>> (monitor:501)
>> Traceback (most recent call last):
>>   File "/usr/lib/python3.6/site-packages/vdsm/storage/monitor.py", line
>> 499, in _pathChecked
>> delay = result.delay()
>>   File "/usr/lib/python3.6/site-packages/vdsm/storage/check.py", line
>> 391, in delay
>> raise exception.MiscFileReadException(self.path, self.rc, self.err)
>> vdsm.storage.exception.MiscFileReadException: Internal file read failure:
>> ('/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported/b92b26cf-fac4-4ccf-ba31-f6fb4184e302/dom_md/metadata',
>> 1, bytearray(b"/usr/bin/dd: failed to open
>> \'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported/b92b26cf-fac4-4ccf-ba31-f6fb4184e302/dom_md/metadata\':
>> Operation not permitted\n"))
>> 2020-01-03 12:50:30,112-0500 DEBUG (jsonrpc/7) [jsonrpc.JsonRpcServer]
>> Calling 'StoragePool.disconnect' in bridge with {'storagepoolID':
>> 'c90b137f-6e1f-4b9a-9612-da58910a2439', 'hostID': 2, 'scsiKey':
>> 'c90b137f-6e1f-4b9a-9612-da58910a2439'} (__init__:329)
>> 2020-01-03 12:50:30,114-0500 INFO  (jsonrpc/7) [vdsm.api] START
>> disconnectStoragePool(spUUID='c90b137f-6e1f-4b9a-9612-da58910a2439',
>> hostID=2, remove=False, options=None) from=:::192.168.201.4,38786,
>> flow_id=8d05a1, task_id=95573498-d1c7-41ad-ad33-28f2192b2b60 (api:48)
>>
>>
>> Probably need to set NFS server export options as in
>> https://bugzilla.redhat.com/show_bug.cgi?id=1776843#c7
>>
>
> Here is fix for NFS server on EL8: https://gerrit.ovirt.org/106120
>
> Should this be changes also for NFS server on EL7?
>

AFAICT the need to change the NFS options mostly arises from changes in
vdsm dependencies for el8, such as libvirt, requiring different access
to NFS shares than before.
So if we are testing el8 hosts with an el7 NFS server, that might be
relevant for the el7 NFS server as well.


>>
>
>
> --
> Martin Perina
> Manager, Software Engineering
> Red Hat Czech s.r.o.
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/DCZ7RJKEL54KSLFVNGSZ6J7F3EIHYBPV/


[ovirt-devel] OST is failing - Last successful run was Dec-13-2019

2020-01-05 Thread Amit Bawer
Seems we have NFS permissions issue for el8 vdsm in some of the runs.

Example from
https://jenkins.ovirt.org/view/Amit/job/ovirt-system-tests_manual/6302/artifact/exported-artifacts/test_logs/basic-suite-master/post-004_basic_sanity.py/lago-basic-suite-master-host-1/_var_log/vdsm/vdsm.log
:


2020-01-03 12:07:34,169-0500 INFO  (MainThread) [vds] (PID: 1264) I am the
actual vdsm 4.40.0.1458.git1fca84350 lago-basic-suite-master-host-1
(4.18.0-80.11.2.el8_0.x86_64) (vdsmd:152)...
2020-01-03 12:50:29,662-0500 ERROR (check/loop) [storage.Monitor] Error
checking path 
/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported/b92b26cf-fac4-4ccf-ba31-f6fb4184e302/dom_md/metadata
(monitor:501)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/storage/monitor.py", line
499, in _pathChecked
delay = result.delay()
  File "/usr/lib/python3.6/site-packages/vdsm/storage/check.py", line 391,
in delay
raise exception.MiscFileReadException(self.path, self.rc, self.err)
vdsm.storage.exception.MiscFileReadException: Internal file read failure:
('/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported/b92b26cf-fac4-4ccf-ba31-f6fb4184e302/dom_md/metadata',
1, bytearray(b"/usr/bin/dd: failed to open
\'/rhev/data-center/mnt/192.168.200.4:_exports_nfs_exported/b92b26cf-fac4-4ccf-ba31-f6fb4184e302/dom_md/metadata\':
Operation not permitted\n"))
2020-01-03 12:50:30,112-0500 DEBUG (jsonrpc/7) [jsonrpc.JsonRpcServer]
Calling 'StoragePool.disconnect' in bridge with {'storagepoolID':
'c90b137f-6e1f-4b9a-9612-da58910a2439', 'hostID': 2, 'scsiKey':
'c90b137f-6e1f-4b9a-9612-da58910a2439'} (__init__:329)
2020-01-03 12:50:30,114-0500 INFO  (jsonrpc/7) [vdsm.api] START
disconnectStoragePool(spUUID='c90b137f-6e1f-4b9a-9612-da58910a2439',
hostID=2, remove=False, options=None) from=:::192.168.201.4,38786,
flow_id=8d05a1, task_id=95573498-d1c7-41ad-ad33-28f2192b2b60 (api:48)


Probably need to set NFS server export options as in
https://bugzilla.redhat.com/show_bug.cgi?id=1776843#c7
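
For context, the "Internal file read failure" above is raised when a dd
read of the metadata path fails; a simplified stand-in for such a probe
(hypothetical sketch -- vdsm's real checker runs dd asynchronously with
direct I/O, both omitted here):

```python
import subprocess

def probe_path(path, timeout=10):
    """Read the first 4 KiB of a path with dd and report stderr on
    failure -- a simplified stand-in for a storage path check.
    Returns (ok, error_text)."""
    cmd = ["dd", "if=%s" % path, "of=/dev/null", "bs=4096", "count=1"]
    try:
        # A TimeoutExpired from a hung mount would propagate to the caller.
        subprocess.run(cmd, check=True, capture_output=True, timeout=timeout)
        return True, ""
    except subprocess.CalledProcessError as exc:
        return False, exc.stderr.decode()
```

With the misconfigured export above, the second element would carry the
"Operation not permitted" message seen in the vdsm log.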
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WLLVWPZU6HFDWVWDIIJAS6OMBG4HRWF5/


[ovirt-devel] OST Fails for missing glusterfs mirrors at host-deploy

2020-01-01 Thread Amit Bawer
Snippet From:
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/6293/console

23:31:25 + cd
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/deployment-basic-suite-master
23:31:25 + lago ovirt deploy
23:31:26 @ Deploy oVirt environment:
23:31:26   # Deploy environment:
23:31:26 * [Thread-2] Deploy VM lago-basic-suite-master-host-0:
23:31:26 * [Thread-3] Deploy VM lago-basic-suite-master-host-1:
23:31:26 * [Thread-4] Deploy VM lago-basic-suite-master-engine:
23:32:15 * [Thread-3] Deploy VM lago-basic-suite-master-host-1: Success
(in 0:00:49)
23:32:39 STDERR
23:32:39 + yum -y install ovirt-host
23:32:39 Error: Error downloading packages:
23:32:39   Cannot download glusterfs-6.6-1.el8.x86_64.rpm: All mirrors were
tried
23:32:39
23:32:39   - STDERR
23:32:39 + yum -y install ovirt-host
23:32:39 Error: Error downloading packages:
23:32:39   Cannot download glusterfs-6.6-1.el8.x86_64.rpm: All mirrors were
tried
23:32:39
23:32:39 * [Thread-2] Deploy VM lago-basic-suite-master-host-0: ERROR
(in 0:01:13)
23:38:05 * [Thread-4] Deploy VM lago-basic-suite-master-engine: ERROR
(in 0:06:39)
23:38:05   # Deploy environment: ERROR (in 0:06:39)
23:38:06 @ Deploy oVirt environment: ERROR (in 0:06:39)
23:38:06 Error occured, aborting
23:38:06 Traceback (most recent call last):
23:38:06   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line
383, in do_run
23:38:06 self.cli_plugins[args.ovirtverb].do_run(args)
23:38:06   File "/usr/lib/python2.7/site-packages/lago/plugins/cli.py",
line 184, in do_run
23:38:06 self._do_run(**vars(args))
23:38:06   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 573,
in wrapper
23:38:06 return func(*args, **kwargs)
23:38:06   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 584,
in wrapper
23:38:06 return func(*args, prefix=prefix, **kwargs)
23:38:06   File "/usr/lib/python2.7/site-packages/ovirtlago/cmd.py", line
181, in do_deploy
23:38:06 prefix.deploy()
23:38:06   File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line
636, in wrapper
23:38:06 return func(*args, **kwargs)
23:38:06   File "/usr/lib/python2.7/site-packages/ovirtlago/reposetup.py",
line 127, in wrapper
23:38:06 return func(*args, **kwargs)
23:38:06   File "/usr/lib/python2.7/site-packages/ovirtlago/prefix.py",
line 284, in deploy
23:38:06 return super(OvirtPrefix, self).deploy()
23:38:06   File "/usr/lib/python2.7/site-packages/lago/sdk_utils.py", line
50, in wrapped
23:38:06 return func(*args, **kwargs)
23:38:06   File "/usr/lib/python2.7/site-packages/lago/log_utils.py", line
636, in wrapper
23:38:06 return func(*args, **kwargs)
23:38:06   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line
1671, in deploy
23:38:06 self.virt_env.get_vms().values()
23:38:06   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 104,
in invoke_in_parallel
23:38:06 return vt.join_all()
23:38:06   File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58,
in _ret_via_queue
23:38:06 queue.put({'return': func()})
23:38:06   File "/usr/lib/python2.7/site-packages/lago/prefix.py", line
1662, in _deploy_host
23:38:06 host.name(),
23:38:06 LagoDeployError:
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/deployment-basic-suite-master/default/scripts/_home_jenkins_agent_workspace_ovirt-system-tests_manual_ovirt-system-tests_basic-suite-master_deploy-scripts_setup_1st_host_el7.sh
failed with status 1 on lago-basic-suite-master-host-0
23:38:06 + res=1
23:38:06 + cd -
23:38:06
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests
23:38:06 + return 1
23:38:06 + env_collect
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy
23:38:06 + local
tests_out_dir=/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy
23:38:06 + [[ -e
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master
]]
23:38:06 + mkdir -p
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master
23:38:06 + cd
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/deployment-basic-suite-master/current
23:38:06 + lago collect --output
/home/jenkins/agent/workspace/ovirt-system-tests_manual/ovirt-system-tests/test_logs/basic-suite-master/post-000_deploy
23:38:08 @ Collect artifacts:
23:38:08   # [Thread-1] lago-basic-suite-master-host-0:
23:38:08   # [Thread-2] lago-basic-suite-master-host-1:
23:38:08   # [Thread-3] lago-basic-suite-master-engine:
23:38:10   # [Thread-1] lago-basic-suite-master-host-0: Success (in 0:00:02)
23:38:10   # [Thread-2] lago-basic-suite-master-host-1: Success (in 0:00:02)
23:38:16   # [Thread-3] lago-basic-suite-master-engine: Success (in 0:00:07)
23:38:16 @ Collect artifacts: Success (in 0:00:07)
23:38:16 + 

[ovirt-devel] Re: [rhev-devel] Re: New dependency for development environment

2019-12-16 Thread Amit Bawer
What do you see for your host in engine.log? Any SSH timeout issue?

On Mon, Dec 16, 2019 at 4:18 PM Kaustav Majumder 
wrote:

>
>
> On Mon, Dec 16, 2019 at 7:45 PM Amit Bawer  wrote:
>
>> From the snippet it seems you are under
>> ~/work/ovirt-engine-builds/11-12-global-options
>>
> This is my $PREFIX .
> ⌂120% [kmajumde:~/work/ovirt-engine-builds/11-12-global-options] $ ls
> var/log/ovirt-engine/
> ansible  boot.log  cinderlib  dump  engine.log  host-deploy  notifier  ova
>  server.log  setup  ui.log
>
>> trying to ls var as a path relative to your working directory.
>> Is it possible you have omitted the leading slash?
>>
>> ls /var/log/ovirt-engine/host-deploy/
>>
>>
>> On Mon, Dec 16, 2019 at 4:11 PM Kaustav Majumder 
>> wrote:
>>
>>> No logs whatsoever.
>>> ```
>>> ls var/log/ovirt-engine/host-deploy/
>>> ⌂142% [kmajumde:~/work/ovirt-engine-builds/11-12-global-options] $
>>> ```
>>>
>>>
>>> On Mon, Dec 16, 2019 at 7:39 PM Amit Bawer  wrote:
>>>
>>>>
>>>>
>>>> On Mon, Dec 16, 2019 at 2:46 PM Kaustav Majumder 
>>>> wrote:
>>>>
>>>>> Hi,
>>>>> i have tried setting up my devel env on an updated fedora 30. Engine
>>>>> is running well but when I am trying to add a new host to the engine
>>>>> (Centos 7.7) it is taking 2+ hrs and has still not added the host. Is this
>>>>> expected behaviour? Also I can't find any host deploy or ansible logs.
>>>>>
>>>>
>>>> Aren't any new logs under engine machine path at
>>>> /var/log/ovirt-engine/host-deploy/ ?
>>>>
>>>>
>>>>> On Thu, Dec 12, 2019 at 2:11 AM Martin Perina 
>>>>> wrote:
>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, Dec 11, 2019 at 4:29 PM Dominik Holler 
>>>>>> wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Wed, Nov 27, 2019 at 8:37 AM Ondra Machacek 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hello,
>>>>>>>>
>>>>>>>> we are going to merge a series of patches to master branch, which
>>>>>>>> integrates ansible-runner with oVirt engine. When the patches will
>>>>>>>> be
>>>>>>>> merged you will need to install new package called ansible-runner-
>>>>>>>> service-dev, and follow instructions so your dev-env will keep
>>>>>>>> working
>>>>>>>> smoothly(all relevant info will be also in README.adoc):
>>>>>>>>
>>>>>>>> 1) sudo dnf update ovirt-release-master
>>>>>>>>
>>>>>>>> 2) sudo dnf install -y ansible-runner-service-dev
>>>>>>>>
>>>>>>>>
>>>>>>> "dnf install -y ansible-runner-service-dev" did not work for me on
>>>>>>> fedora 29.
>>>>>>>
>>>>>>
>>>>>> You need to have at least FC30, because ansible-runner on FC29 is too
>>>>>> old
>>>>>>
>>>>>>> I created manually the file /etc/yum.repos.d/centos.repo:
>>>>>>> [centos-ovirt44-testing]
>>>>>>> name=CentOS-7 - oVirt 4.4
>>>>>>> baseurl=
>>>>>>> http://cbs.centos.org/repos/virt7-ovirt-44-testing/$basearch/os/
>>>>>>> gpgcheck=0
>>>>>>> enabled=1
>>>>>>>
>>>>>>> which made the ansible-runner-service-dev available.
>>>>>>>
>>>>>>>
>>>>>>>> 3) Edit `/etc/ansible-runner-service/config.yaml` file as follows:
>>>>>>>>
>>>>>>>>---
>>>>>>>>playbooks_root_dir:
>>>>>>>> '$PREFIX/share/ovirt-engine/ansible-runner-service-project'
>>>>>>>>ssh_private_key:
>>>>>>>> '$PREFIX/etc/pki/ovirt-engine/keys/engine_id_rsa'
>>>>>>>>port: 50001
>>>>>>>>target_user: root
>>>>>>>>
>>>>>>>> Where `$PREFIX` is the prefix of your development environment
>>>>>>>> prefix,

[ovirt-devel] Re: [rhev-devel] Re: New dependency for development environment

2019-12-16 Thread Amit Bawer
From the snippet it seems you are under
~/work/ovirt-engine-builds/11-12-global-options, trying to ls var as a
path relative to your working directory.
Is it possible you have omitted the leading slash?

ls /var/log/ovirt-engine/host-deploy/


On Mon, Dec 16, 2019 at 4:11 PM Kaustav Majumder 
wrote:

> No logs whatsoever.
> ```
> ls var/log/ovirt-engine/host-deploy/
> ⌂142% [kmajumde:~/work/ovirt-engine-builds/11-12-global-options] $
> ```
>
>
> On Mon, Dec 16, 2019 at 7:39 PM Amit Bawer  wrote:
>
>>
>>
>> On Mon, Dec 16, 2019 at 2:46 PM Kaustav Majumder 
>> wrote:
>>
>>> Hi,
>>> I have tried setting up my devel env on an updated Fedora 30. The engine is
>>> running well, but when I try to add a new host to the engine (CentOS
>>> 7.7) it takes 2+ hrs and the host has still not been added. Is this expected
>>> behaviour? Also, I can't find any host-deploy or ansible logs.
>>>
>>
>> Aren't there any new logs on the engine machine under
>> /var/log/ovirt-engine/host-deploy/?
>>
>>
>>> On Thu, Dec 12, 2019 at 2:11 AM Martin Perina 
>>> wrote:
>>>
>>>>
>>>>
>>>> On Wed, Dec 11, 2019 at 4:29 PM Dominik Holler 
>>>> wrote:
>>>>
>>>>>
>>>>>
>>>>> On Wed, Nov 27, 2019 at 8:37 AM Ondra Machacek 
>>>>> wrote:
>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> we are going to merge a series of patches to master branch, which
>>>>>> integrates ansible-runner with oVirt engine. When the patches will be
>>>>>> merged, you will need to install a new package called ansible-runner-
>>>>>> service-dev and follow the instructions so your dev-env will keep working
>>>>>> smoothly (all relevant info will also be in README.adoc):
>>>>>>
>>>>>> 1) sudo dnf update ovirt-release-master
>>>>>>
>>>>>> 2) sudo dnf install -y ansible-runner-service-dev
>>>>>>
>>>>>>
>>>>> "dnf install -y ansible-runner-service-dev" did not work for me on
>>>>> fedora 29.
>>>>>
>>>>
>>>> You need to have at least FC30, because ansible-runner on FC29 is too
>>>> old
>>>>
>>>>> I created manually the file /etc/yum.repos.d/centos.repo:
>>>>> [centos-ovirt44-testing]
>>>>> name=CentOS-7 - oVirt 4.4
>>>>> baseurl=
>>>>> http://cbs.centos.org/repos/virt7-ovirt-44-testing/$basearch/os/
>>>>> gpgcheck=0
>>>>> enabled=1
>>>>>
>>>>> which made the ansible-runner-service-dev available.
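The workaround above can be scripted; the stanza below is exactly the one quoted in this thread, written here into a scratch directory so the sketch is runnable (on a real host the target would be /etc/yum.repos.d/centos.repo, written with root privileges).

```shell
# Sketch: write the repo stanza from this thread. Target path here is a
# scratch dir, not the real /etc/yum.repos.d.
repodir=$(mktemp -d)
cat > "$repodir/centos.repo" <<'EOF'
[centos-ovirt44-testing]
name=CentOS-7 - oVirt 4.4
baseurl=http://cbs.centos.org/repos/virt7-ovirt-44-testing/$basearch/os/
gpgcheck=0
enabled=1
EOF
grep '^baseurl=' "$repodir/centos.repo"
```

The quoted heredoc delimiter ('EOF') keeps $basearch literal so dnf expands it, not the shell.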
>>>>>
>>>>>
>>>>>> 3) Edit `/etc/ansible-runner-service/config.yaml` file as follows:
>>>>>>
>>>>>>---
>>>>>>playbooks_root_dir:
>>>>>> '$PREFIX/share/ovirt-engine/ansible-runner-service-project'
>>>>>>ssh_private_key: '$PREFIX/etc/pki/ovirt-engine/keys/engine_id_rsa'
>>>>>>port: 50001
>>>>>>target_user: root
>>>>>>
>>>>>> Where `$PREFIX` is the prefix of your development environment,
>>>>>> which you've specified during the compilation of the engine.
>>>>>>
>>>>>> 4) Restart and enable ansible-runner-service:
>>>>>>
>>>>>># systemctl restart ansible-runner-service
>>>>>># systemctl enable ansible-runner-service
>>>>>>
>>>>>> That's it, your dev-env should start using the ansible-runner-service
>>>>>> for host-deployment etc.
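Step 3 above can be sketched as a small script; the prefix value is an assumption for illustration (use whatever prefix you gave the engine build), and the sketch writes to a scratch file so it is runnable, whereas the real target is /etc/ansible-runner-service/config.yaml (needs root).

```shell
# Sketch: render config.yaml for a dev prefix. PREFIX is an assumption;
# substitute your own. Real target: /etc/ansible-runner-service/config.yaml.
PREFIX="$HOME/ovirt-engine"
cfg=$(mktemp)
cat > "$cfg" <<EOF
---
playbooks_root_dir: '$PREFIX/share/ovirt-engine/ansible-runner-service-project'
ssh_private_key: '$PREFIX/etc/pki/ovirt-engine/keys/engine_id_rsa'
port: 50001
target_user: root
EOF
grep '^port:' "$cfg"    # prints: port: 50001
```

The unquoted heredoc delimiter lets the shell expand $PREFIX into the file.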
>>>>>>
>>>>>> Please note that only Fedora 30/31 and CentOS 7 were packaged and are
>>>>>> natively supported!
>>>>>>
>>>>>> Thanks,
>>>>>> Ondra
>>>>>> ___
>>>>>> Devel mailing list -- devel@ovirt.org
>>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct:
>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives:
>>>>>> https://lists.ovirt.org/archives/list/devel@ovir

[ovirt-devel] Re: [rhev-devel] Re: New dependency for development environment

2019-12-16 Thread Amit Bawer
On Mon, Dec 16, 2019 at 2:46 PM Kaustav Majumder 
wrote:

> Hi,
> I have tried setting up my devel env on an updated Fedora 30. The engine is
> running well, but when I try to add a new host to the engine (CentOS
> 7.7) it takes 2+ hrs and the host has still not been added. Is this expected
> behaviour? Also, I can't find any host-deploy or ansible logs.
>

Aren't there any new logs on the engine machine under
/var/log/ovirt-engine/host-deploy/?


> On Thu, Dec 12, 2019 at 2:11 AM Martin Perina  wrote:
>
>>
>>
>> On Wed, Dec 11, 2019 at 4:29 PM Dominik Holler 
>> wrote:
>>
>>>
>>>
>>> On Wed, Nov 27, 2019 at 8:37 AM Ondra Machacek 
>>> wrote:
>>>
 Hello,

 we are going to merge a series of patches to master branch, which
 integrates ansible-runner with oVirt engine. When the patches will be
 merged, you will need to install a new package called ansible-runner-
 service-dev and follow the instructions so your dev-env will keep working
 smoothly (all relevant info will also be in README.adoc):

 1) sudo dnf update ovirt-release-master

 2) sudo dnf install -y ansible-runner-service-dev


>>> "dnf install -y ansible-runner-service-dev" did not work for me on
>>> fedora 29.
>>>
>>
>> You need to have at least FC30, because ansible-runner on FC29 is too old
>>
>>> I created manually the file /etc/yum.repos.d/centos.repo:
>>> [centos-ovirt44-testing]
>>> name=CentOS-7 - oVirt 4.4
>>> baseurl=http://cbs.centos.org/repos/virt7-ovirt-44-testing/$basearch/os/
>>> gpgcheck=0
>>> enabled=1
>>>
>>> which made the ansible-runner-service-dev available.
>>>
>>>
 3) Edit `/etc/ansible-runner-service/config.yaml` file as follows:

---
playbooks_root_dir:
 '$PREFIX/share/ovirt-engine/ansible-runner-service-project'
ssh_private_key: '$PREFIX/etc/pki/ovirt-engine/keys/engine_id_rsa'
port: 50001
target_user: root

 Where `$PREFIX` is the prefix of your development environment,
 which you've specified during the compilation of the engine.

 4) Restart and enable ansible-runner-service:

# systemctl restart ansible-runner-service
# systemctl enable ansible-runner-service

 That's it, your dev-env should start using the ansible-runner-service
 for host-deployment etc.

 Please note that only Fedora 30/31 and CentOS 7 were packaged and are
 natively supported!

 Thanks,
 Ondra
 ___
 Devel mailing list -- devel@ovirt.org
 To unsubscribe send an email to devel-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AFKGTV4WDNONLND63RR6YMSMV4FJQM4L/

>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RMNEQG7KNFWSQX4REN3JN34ED4KTGYRH/
>>>
>>
>>
>> --
>> Martin Perina
>> Manager, Software Engineering
>> Red Hat Czech s.r.o.
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5TMTVOWQLF7GQS2ZXFEZ6TFID6LI26UO/
>>
>
>
> --
>
> Thanks,
>
> Kaustav Majumder
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FDMHKWZ2O2W7B3UKT5RWM57CAHIDIOSJ/


[ovirt-devel] Re: [rhev-devel] Re: New dependency for development environment

2019-12-16 Thread Amit Bawer
Hi Kaustav,

Have you tried running the following on your host before it is deployed from
the engine?

 # dnf install http://resources.ovirt.org/pub/yum-repo/ovirt-release-master.rpm

 # dnf update


On Mon, Dec 16, 2019 at 2:46 PM Kaustav Majumder 
wrote:

> Hi,
> I have tried setting up my devel env on an updated Fedora 30. The engine is
> running well, but when I try to add a new host to the engine (CentOS
> 7.7) it takes 2+ hrs and the host has still not been added. Is this expected
> behaviour? Also, I can't find any host-deploy or ansible logs.
>
> On Thu, Dec 12, 2019 at 2:11 AM Martin Perina  wrote:
>
>>
>>
>> On Wed, Dec 11, 2019 at 4:29 PM Dominik Holler 
>> wrote:
>>
>>>
>>>
>>> On Wed, Nov 27, 2019 at 8:37 AM Ondra Machacek 
>>> wrote:
>>>
 Hello,

 we are going to merge a series of patches to master branch, which
 integrates ansible-runner with oVirt engine. When the patches will be
 merged, you will need to install a new package called ansible-runner-
 service-dev and follow the instructions so your dev-env will keep working
 smoothly (all relevant info will also be in README.adoc):

 1) sudo dnf update ovirt-release-master

 2) sudo dnf install -y ansible-runner-service-dev


>>> "dnf install -y ansible-runner-service-dev" did not work for me on
>>> fedora 29.
>>>
>>
>> You need to have at least FC30, because ansible-runner on FC29 is too old
>>
>>> I created manually the file /etc/yum.repos.d/centos.repo:
>>> [centos-ovirt44-testing]
>>> name=CentOS-7 - oVirt 4.4
>>> baseurl=http://cbs.centos.org/repos/virt7-ovirt-44-testing/$basearch/os/
>>> gpgcheck=0
>>> enabled=1
>>>
>>> which made the ansible-runner-service-dev available.
>>>
>>>
 3) Edit `/etc/ansible-runner-service/config.yaml` file as follows:

---
playbooks_root_dir:
 '$PREFIX/share/ovirt-engine/ansible-runner-service-project'
ssh_private_key: '$PREFIX/etc/pki/ovirt-engine/keys/engine_id_rsa'
port: 50001
target_user: root

 Where `$PREFIX` is the prefix of your development environment,
 which you've specified during the compilation of the engine.

 4) Restart and enable ansible-runner-service:

# systemctl restart ansible-runner-service
# systemctl enable ansible-runner-service

 That's it, your dev-env should start using the ansible-runner-service
 for host-deployment etc.

 Please note that only Fedora 30/31 and CentOS 7 were packaged and are
 natively supported!

 Thanks,
 Ondra
 ___
 Devel mailing list -- devel@ovirt.org
 To unsubscribe send an email to devel-le...@ovirt.org
 Privacy Statement: https://www.ovirt.org/site/privacy-policy/
 oVirt Code of Conduct:
 https://www.ovirt.org/community/about/community-guidelines/
 List Archives:
 https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AFKGTV4WDNONLND63RR6YMSMV4FJQM4L/

>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RMNEQG7KNFWSQX4REN3JN34ED4KTGYRH/
>>>
>>
>>
>> --
>> Martin Perina
>> Manager, Software Engineering
>> Red Hat Czech s.r.o.
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5TMTVOWQLF7GQS2ZXFEZ6TFID6LI26UO/
>>
>
>
> --
>
> Thanks,
>
> Kaustav Majumder
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PXQAMLFEMUZMAYFQ7CRKK47SYYH6UQGY/


[ovirt-devel] Re: CI: jsonrpcserver test fails

2019-12-11 Thread Amit Bawer
On Wed, Dec 11, 2019 at 5:25 PM Marcin Sobczyk  wrote:

> Hi,
>
> On 12/11/19 2:12 PM, Amit Bawer wrote:
>
>
>
> On Wed, Dec 11, 2019 at 2:56 PM Amit Bawer  wrote:
>
>> Hi devel,
>>
>> We have (also) frequent connectivity/timeout failures on CI tests for
>> jsonrpcserver tests.
>>
>> Example:
>>
>>
>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/15642//artifact/check-patch.tests-py3.el8.x86_64/mock_logs/script/stdout_stderr.log
>>
>> Proposing the following patches:
>>
>> https://gerrit.ovirt.org/#/c/105518/
>> https://gerrit.ovirt.org/#/c/105519/
>>
> See also: https://gerrit.ovirt.org/#/c/105526/
>
>>
>> You are most welcome to review or suggest alternative ones.
>>
> The timeouts probably won't help - there are some races that won't be
> resolved by that.
> I suggest reporting and simply skipping the failing tests for now,
> unfortunately.
>

Fine with me, will abandon. Thanks


> Marcin
>
>
>> Thanks.
>>
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: 
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives: 
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YNIIMS2YXUOUWMBVG4QWETBJZFK2YCK3/
>
>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ISQ5LMBE5PEYIEJXODPTOZ6EYWNASTPK/


[ovirt-devel] Re: Vdsm/CI: Failed to synchronize cache for repo 'epel-el8'

2019-12-11 Thread Amit Bawer
On Wed, Dec 11, 2019 at 5:21 PM Marcin Sobczyk  wrote:

> Hi,
>
> On 12/11/19 3:40 PM, Amit Bawer wrote:
>
>
>
> On Wed, Dec 11, 2019 at 4:13 PM Amit Bawer  wrote:
>
>>
>>
>> On Wed, Dec 11, 2019 at 4:02 PM Ehud Yonasi  wrote:
>>
>>> Hi Amit,
>>> We mirror fc30 updates so that should not cause any problems.
>>> Do you maybe have, under automation, a .repos file for fc30 containing the
>>> fc30-updates repo?
>>>
>>
>> Seems that we do:
>>
>> [abawer@localhost automation]$ grep "fc30-update" $(ls -a | grep repo)
>> check-patch.install.repos.fc30:fc30-updates-debuginfo,
>> http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
>> check-patch.linters.repos.fc30:fc30-updates-debuginfo,
>> http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
>> check-patch.repos.fc30:fc30-updates-debuginfo,
>> http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
>> check-patch.tests-py3.repos.fc30:fc30-updates-debuginfo,
>> http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
>> [abawer@localhost automation]$
>>
>> Can we remove those entries? Is it just for CI?
>>
> Thanks, they are all symlinked to the same repo file; I added a patch
> removing fc30-updates-debuginfo from the list:
> https://gerrit.ovirt.org/#/c/105531/
>
> Unfortunately with the removal of 'debuginfo' repo for fc30 it seems we're
> not able to install debug info for python3 anymore [1]:
>
> [2019-12-11T14:50:49.151Z] + python3 tests/profile debuginfo-install 
> debuginfo-install -y python3
> [2019-12-11T14:51:22.728Z] Could not find debuginfo for package: 
> python36-3.6.8-2.module_el8.0.0+33+0a10c0e1.x86_64
> [2019-12-11T14:51:22.728Z] Could not find debuginfo for package: 
> python36-3.6.8-2.module_el8.0.0+33+0a10c0e1.x86_64
> [2019-12-11T14:51:22.728Z] No debuginfo packages available to install
>
This error also appears in other CI runs not related to
https://gerrit.ovirt.org/#/c/105531/; see, for example,
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/15704/consoleFull
(line 5214).

But I think we still need to provide the fc30 debuginfo repo anyway; I will
try the alternatives Ehud suggested next.


> It was working for a patch merged a couple of hours ago today [2]:
>
> [2019-12-11T10:04:29.733Z] + python3 tests/profile debuginfo-install 
> debuginfo-install -y python3
> [2019-12-11T10:04:30.787Z] Custom fc30-updates-debuginfo
> 13 kB/s | 4.9 kB 00:00
> [2019-12-11T10:04:30.788Z] Custom tested   
> 365 kB/s | 3.0 kB 00:00
> [2019-12-11T10:04:31.115Z] Custom vdo   
> 68 kB/s | 3.3 kB 00:00
> [2019-12-11T10:04:31.115Z] Custom virt-preview  
> 56 kB/s | 3.6 kB 00:00
> [2019-12-11T10:04:31.115Z] Custom nmstate   
> 49 kB/s | 3.3 kB 00:00
> [2019-12-11T10:04:31.115Z] Custom networkmanager
> 50 kB/s | 3.3 kB 00:00
> [2019-12-11T10:04:31.115Z] fedora  
> 700 kB/s | 4.2 kB 00:00
> [2019-12-11T10:04:32.477Z] updates 
> 603 kB/s | 4.2 kB 00:00
> [2019-12-11T10:04:43.421Z] Dependencies resolved.
> [2019-12-11T10:04:43.421Z] 
> 
> [2019-12-11T10:04:43.422Z]  Package   Arch Version  
> RepositorySize
> [2019-12-11T10:04:43.422Z] 
> 
> [2019-12-11T10:04:43.422Z] Installing:
> [2019-12-11T10:04:43.422Z]  python3-debuginfo x86_64   3.7.5-1.fc30 
> fc30-updates-debuginfo   9.6 M
> [2019-12-11T10:04:43.422Z]  python3-debugsource   x86_64   3.7.5-1.fc30 
> fc30-updates-debuginfo   3.0 M
> [2019-12-11T10:04:43.422Z]
> [2019-12-11T10:04:43.422Z] Transaction Summary
> [2019-12-11T10:04:43.422Z] 
> 
>
>
> Ehud, are we missing something here? Should we use different urls for
> these fc30 repositories?
>
>
> [1]
> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_standard-check-patch/runs/15706/nodes/141/log/?start=0
> [2]
> https://jenkins.ovirt.org/blue/rest/organizations/jenkins/pipelines/vdsm_standard-check-patch/runs/15616/nodes/143/log/?start=0
>
>
>
>>
>>> If so it might fail on that and not on our mirror.

[ovirt-devel] Re: Vdsm/CI: TimeoutError in VdsmClientTests

2019-12-11 Thread Amit Bawer
Maybe this was tried before, but I've put a set of simple fixes here
regarding those issues:

https://gerrit.ovirt.org/#/q/status:open+project:vdsm+branch:master+topic:jsonrpcserver

On Mon, Dec 9, 2019 at 8:48 PM Milan Zamazal  wrote:

> Marcin Sobczyk  writes:
>
> > Hi,
> >
> > On 12/7/19 7:52 AM, Martin Perina wrote:
> >> Marcin, could you please investigate?
> >>
> > These failures are the already known race/timeout issues with our
> > yajsonrpc/stomp tests.
> > Since we switched to dynamic SSL key-cert generation in [2] they just
> > manifest in a different way.
> > I posted [1] to disable these tests for now.
>
> Thanks, Marcin, looks like a good step now to avoid the failures.
>
> > [1] https://gerrit.ovirt.org/105431
> > [2]
> >
> https://gerrit.ovirt.org/#/q/status:merged+project:vdsm+branch:master+topic:vdsm-tests-package-removal
> >
> >>
> >> On Fri, 6 Dec 2019, 19:50 Milan Zamazal,  >> > wrote:
> >>
> >> [Sorry, sent an unfinished mail by mistake.]
> >>
> >> Milan Zamazal mailto:mzama...@redhat.com>>
> >> writes:
> >>
> >> > Hi, this seems to be a frequent error in Jenkins run recently:
> >> >
> >> >
> >> > === FAILURES
> >> ===
> >> > __ VdsmClientTests.test_failing_call
> >> ___
> >> >
> >> > self =  >> testMethod=test_failing_call>
> >> >
> >> > def test_failing_call(self):
> >> > with self._create_client() as client:
> >> > with self.assertRaises(ServerError) as ex:
> >> >>   client.Test.failingCall()
> >> >
> >> > lib/yajsonrpc/stomprpcclient_test.py:144:
> >> > _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
> >> _ _ _ _ _ _ _ _
> >> >
> >> > def _call(self, namespace, method_name, **kwargs):
> >> > """
> >> > Client call method, executes a given command
> >> >
> >> > Args:
> >> > namespace (string): namespace name
> >> > method_name (string): method name
> >> > **kwargs: Arbitrary keyword arguments
> >> >
> >> > Returns:
> >> > method result
> >> >
> >> > Raises:
> >> > ClientError: in case of an error in the protocol.
> >> > TimeoutError: if there is no response after a pre
> >> configured time.
> >> > ServerError: in case of an error while executing the
> >> command
> >> > """
> >> > method = namespace + "." + method_name
> >> > timeout = kwargs.pop("_timeout", self._default_timeout)
> >> >
> >> > req = yajsonrpc.JsonRpcRequest(
> >> > method, kwargs, reqId=str(uuid.uuid4()))
> >> >
> >> > try:
> >> > responses = self._client.call(
> >> > req, timeout=timeout, flow_id=self._flow_id)
> >> > except EnvironmentError as e:
> >> > raise ClientError(method, kwargs, e)
> >> >
> >> > if not responses:
> >> >>   raise TimeoutError(method, kwargs, timeout)
> >> > E   vdsm.client.TimeoutError: Request Test.failingCall
> >> with args {} timed out after 3 seconds
> >> >
> >> > ../lib/vdsm/client.py:294: TimeoutError
> >> > -- Captured log call
> >> ---
> >> > ERRORvds.dispatcher:betterAsyncore.py:179 uncaptured python
> >> exception, closing channel  >> ('::1', 47428, 0, 0) at 0x7f48ddc47d10> ( >> 'ValueError'>:'b'ept-version:1.2'' contains illegal character ':'
> >> [/usr/lib64/python3.7/asyncore.py|readwrite|108]
> >> [/usr/lib64/python3.7/asyncore.py|handle_read_event|422]
> >>
>  
> [/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|handle_read|71]
> >>
>  
> [/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|_delegate_call|168]
> >>
>  
> [/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/protocoldetector.py|handle_read|129]
> >>
>  
> [/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|handle_socket|413]
> >>
>  
> [/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/vdsm/rpc/bindingjsonrpc.py|add_socket|54]
> >>
>  
> [/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|createListener|379]
> >>
>  
> [/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/stompserver.py|StompListener|345]
> >>
>  
> [/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|__init__|47]
> >>
>  
> [/home/jenkins/workspace/vdsm_standard-check-patch/vdsm/lib/yajsonrpc/betterAsyncore.py|switch_implementation|86]
> >>
>  
> [/home/jenkins/workspace/vdsm_standa

[ovirt-devel] Re: Vdsm/CI: Failed to synchronize cache for repo 'epel-el8'

2019-12-11 Thread Amit Bawer
On Wed, Dec 11, 2019 at 4:13 PM Amit Bawer  wrote:

>
>
> On Wed, Dec 11, 2019 at 4:02 PM Ehud Yonasi  wrote:
>
>> Hi Amit,
>> We mirror fc30 updates so that should not cause any problems.
>> Do you maybe have, under automation, a .repos file for fc30 containing the
>> fc30-updates repo?
>>
>
> Seems that we do:
>
> [abawer@localhost automation]$ grep "fc30-update" $(ls -a | grep repo)
> check-patch.install.repos.fc30:fc30-updates-debuginfo,
> http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
> check-patch.linters.repos.fc30:fc30-updates-debuginfo,
> http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
> check-patch.repos.fc30:fc30-updates-debuginfo,
> http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
> check-patch.tests-py3.repos.fc30:fc30-updates-debuginfo,
> http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
> [abawer@localhost automation]$
>
> Can we remove those entries? Is it just for CI?
>
Thanks, they are all symlinked to the same repo file; I added a patch
removing fc30-updates-debuginfo from the list:
https://gerrit.ovirt.org/#/c/105531/
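Since the automation repos files are all symlinks to one file, the fix amounts to deleting a single line from that file. A sketch over a hypothetical copy of such a file (the entries below are placeholders for illustration; the real change is the gerrit patch above):

```shell
# Sketch on a hypothetical repos file: drop the fc30-updates-debuginfo entry.
# The second entry is a made-up placeholder, not from the real file.
repos=$(mktemp)
cat > "$repos" <<'EOF'
fc30-updates-debuginfo,http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
some-other-repo,http://example.org/repo/
EOF
sed -i '/fc30-updates-debuginfo/d' "$repos"
cat "$repos"    # only the placeholder entry remains
```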


>
>> If so it might fail on that and not on our mirror.
>>
>> On Wed, Dec 11, 2019 at 3:29 PM Amit Bawer  wrote:
>>
>>> Hi
>>>
>>> Seems there is a similar issue with fc30 lately:
>>>
>>> + python3 tests/profile debuginfo-install debuginfo-install -y python3
>>> Custom fc30-updates-debuginfo41 kB/s | 4.9 kB
>>> 00:00
>>> Custom fc30-updates-debuginfo   1.2 kB/s | 676  B
>>> 00:00
>>> Error: Failed to download metadata for repo 'fc30-updates-debuginfo':
>>> Yum repo downloading error: Downloading error(s):
>>> repodata/5e7a2915066242a77ff4c4da8229c3e5bd3414d71ad3417e4ed73cdddc609404-primary.xml.zck
>>> - Cannot download, all mirrors were already tried without success;
>>> repodata/d6302eb9dd101ab5ea65f51b98440bc25d2ac49c87349211cd11f24c61d48686-filelists.xml.zck
>>> - Cannot download, all mirrors were already tried without success
>>> PROFILE {"command": ["debuginfo-install", "-y", "python3"], "cpu":
>>> 62.590739013275986, "elapsed": 1.7404060363769531, "idrss": 0, "inblock":
>>> 0, "isrss": 0, "ixrss": 0, "majflt": 0, "maxrss": 52740, "minflt": 8356,
>>> "msgrcv": 0, "msgsnd": 0, "name": "debuginfo-install", "nivcsw": 254,
>>> "nsignals": 0, "nswap": 0, "nvcsw": 27, "oublock": 128, "start":
>>> 1576067914.252874, "status": 1, "stime": 0.22821, "utime": 0.861123}
>>> + teardown
>>>
>>> Full log:
>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/15663//artifact/check-patch.tests-py3.fc30.x86_64/mock_logs/script/stdout_stderr.log
>>>
>>> On Fri, Dec 6, 2019 at 8:34 PM Milan Zamazal 
>>> wrote:
>>>
>>>> Hi, I've seen this error more than once in Jenkins runs on Vdsm patches
>>>> posted to gerrit:
>>>>
>>>>   + python3 tests/profile debuginfo-install debuginfo-install -y python3
>>>>   Error: Failed to synchronize cache for repo 'epel-el8'
>>>>   PROFILE {"command": ["debuginfo-install", "-y", "python3"], "cpu":
>>>> 25.8319700558958, "elapsed": 39.049031019210815, "idrss": 0, "inblock": 0,
>>>> "isrss": 0, "ixrss": 0, "majflt": 0, "maxrss": 92900, "minflt": 35674,
>>>> "msgrcv": 0, "msgsnd": 0, "name": "debuginfo-install", "nivcsw": 1096,
>>>> "nsignals": 0, "nswap": 0, "nvcsw": 904, "oublock": 64848, "start":
>>>> 1575653575.116399, "status": 1, "stime": 1.135238, "utime": 8.951896}
>>>>   + teardown
>>>>   + res=1
>>>>   + '[' 1 -ne 0 ']'
>>>>   + echo '*** err: 1'
>>>>   *** err: 1
>>>>
>>>> See e.g.
>>>>
>>>> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/15459/pipeline/138
>>>>
>>>> Does anybody know what's going on and how to remedy it?
>>>>
>>>> Thanks,
>>>> Milan
>>>> ___
>>>> Devel mailing list -- devel@ovirt.org
>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ADUCKMPDWVDWSF2RCUS3VLKTLHYZNA7T/
>>>>
>>>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/RHUIMM4SWFZMZPR4AAD6SBCTCG36Y4HX/


[ovirt-devel] Re: Vdsm/CI: Failed to synchronize cache for repo 'epel-el8'

2019-12-11 Thread Amit Bawer
On Wed, Dec 11, 2019 at 4:02 PM Ehud Yonasi  wrote:

> Hi Amit,
> We mirror fc30 updates so that should not cause any problems.
> Do you maybe have, under automation, a .repos file for fc30 containing the
> fc30-updates repo?
>

Seems that we do:

[abawer@localhost automation]$ grep "fc30-update" $(ls -a | grep repo)
check-patch.install.repos.fc30:fc30-updates-debuginfo,
http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
check-patch.linters.repos.fc30:fc30-updates-debuginfo,
http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
check-patch.repos.fc30:fc30-updates-debuginfo,
http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
check-patch.tests-py3.repos.fc30:fc30-updates-debuginfo,
http://download.fedoraproject.org/pub/fedora/linux/updates/$releasever/Everything/$basearch/debug/
[abawer@localhost automation]$

Can we remove those entries? Is it just for CI?


> If so it might fail on that and not on our mirror.
>
> On Wed, Dec 11, 2019 at 3:29 PM Amit Bawer  wrote:
>
>> Hi
>>
>> Seems there is a similar issue with fc30 lately:
>>
>> + python3 tests/profile debuginfo-install debuginfo-install -y python3
>> Custom fc30-updates-debuginfo41 kB/s | 4.9 kB
>> 00:00
>> Custom fc30-updates-debuginfo   1.2 kB/s | 676  B
>> 00:00
>> Error: Failed to download metadata for repo 'fc30-updates-debuginfo': Yum
>> repo downloading error: Downloading error(s):
>> repodata/5e7a2915066242a77ff4c4da8229c3e5bd3414d71ad3417e4ed73cdddc609404-primary.xml.zck
>> - Cannot download, all mirrors were already tried without success;
>> repodata/d6302eb9dd101ab5ea65f51b98440bc25d2ac49c87349211cd11f24c61d48686-filelists.xml.zck
>> - Cannot download, all mirrors were already tried without success
>> PROFILE {"command": ["debuginfo-install", "-y", "python3"], "cpu":
>> 62.590739013275986, "elapsed": 1.7404060363769531, "idrss": 0, "inblock":
>> 0, "isrss": 0, "ixrss": 0, "majflt": 0, "maxrss": 52740, "minflt": 8356,
>> "msgrcv": 0, "msgsnd": 0, "name": "debuginfo-install", "nivcsw": 254,
>> "nsignals": 0, "nswap": 0, "nvcsw": 27, "oublock": 128, "start":
>> 1576067914.252874, "status": 1, "stime": 0.22821, "utime": 0.861123}
>> + teardown
>>
>> Full log:
>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/15663//artifact/check-patch.tests-py3.fc30.x86_64/mock_logs/script/stdout_stderr.log
>>
>> On Fri, Dec 6, 2019 at 8:34 PM Milan Zamazal  wrote:
>>
>>> Hi, I've seen this error more than once in Jenkins runs on Vdsm patches
>>> posted to gerrit:
>>>
>>>   + python3 tests/profile debuginfo-install debuginfo-install -y python3
>>>   Error: Failed to synchronize cache for repo 'epel-el8'
>>>   PROFILE {"command": ["debuginfo-install", "-y", "python3"], "cpu":
>>> 25.8319700558958, "elapsed": 39.049031019210815, "idrss": 0, "inblock": 0,
>>> "isrss": 0, "ixrss": 0, "majflt": 0, "maxrss": 92900, "minflt": 35674,
>>> "msgrcv": 0, "msgsnd": 0, "name": "debuginfo-install", "nivcsw": 1096,
>>> "nsignals": 0, "nswap": 0, "nvcsw": 904, "oublock": 64848, "start":
>>> 1575653575.116399, "status": 1, "stime": 1.135238, "utime": 8.951896}
>>>   + teardown
>>>   + res=1
>>>   + '[' 1 -ne 0 ']'
>>>   + echo '*** err: 1'
>>>   *** err: 1
>>>
>>> See e.g.
>>>
>>> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/15459/pipeline/138
>>>
>>> Does anybody know what's going on and how to remedy it?
>>>
>>> Thanks,
>>> Milan
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ADUCKMPDWVDWSF2RCUS3VLKTLHYZNA7T/
>>>
>>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YUOE7AKHIWPNSE7XGY5WD5HMYSRHE6ZR/


[ovirt-devel] Re: Vdsm/CI: Failed to synchronize cache for repo 'epel-el8'

2019-12-11 Thread Amit Bawer
Hi

Seems there is a similar issue with fc30 lately:

+ python3 tests/profile debuginfo-install debuginfo-install -y python3
Custom fc30-updates-debuginfo41 kB/s | 4.9 kB 00:00

Custom fc30-updates-debuginfo   1.2 kB/s | 676  B 00:00

Error: Failed to download metadata for repo 'fc30-updates-debuginfo': Yum
repo downloading error: Downloading error(s):
repodata/5e7a2915066242a77ff4c4da8229c3e5bd3414d71ad3417e4ed73cdddc609404-primary.xml.zck
- Cannot download, all mirrors were already tried without success;
repodata/d6302eb9dd101ab5ea65f51b98440bc25d2ac49c87349211cd11f24c61d48686-filelists.xml.zck
- Cannot download, all mirrors were already tried without success
PROFILE {"command": ["debuginfo-install", "-y", "python3"], "cpu":
62.590739013275986, "elapsed": 1.7404060363769531, "idrss": 0, "inblock":
0, "isrss": 0, "ixrss": 0, "majflt": 0, "maxrss": 52740, "minflt": 8356,
"msgrcv": 0, "msgsnd": 0, "name": "debuginfo-install", "nivcsw": 254,
"nsignals": 0, "nswap": 0, "nvcsw": 27, "oublock": 128, "start":
1576067914.252874, "status": 1, "stime": 0.22821, "utime": 0.861123}
+ teardown

Full log:
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/15663//artifact/check-patch.tests-py3.fc30.x86_64/mock_logs/script/stdout_stderr.log

On Fri, Dec 6, 2019 at 8:34 PM Milan Zamazal  wrote:

> Hi, I've seen this error more than once in Jenkins runs on Vdsm patches
> posted to gerrit:
>
>   + python3 tests/profile debuginfo-install debuginfo-install -y python3
>   Error: Failed to synchronize cache for repo 'epel-el8'
>   PROFILE {"command": ["debuginfo-install", "-y", "python3"], "cpu":
> 25.8319700558958, "elapsed": 39.049031019210815, "idrss": 0, "inblock": 0,
> "isrss": 0, "ixrss": 0, "majflt": 0, "maxrss": 92900, "minflt": 35674,
> "msgrcv": 0, "msgsnd": 0, "name": "debuginfo-install", "nivcsw": 1096,
> "nsignals": 0, "nswap": 0, "nvcsw": 904, "oublock": 64848, "start":
> 1575653575.116399, "status": 1, "stime": 1.135238, "utime": 8.951896}
>   + teardown
>   + res=1
>   + '[' 1 -ne 0 ']'
>   + echo '*** err: 1'
>   *** err: 1
>
> See e.g.
>
> https://jenkins.ovirt.org/blue/organizations/jenkins/vdsm_standard-check-patch/detail/vdsm_standard-check-patch/15459/pipeline/138
>
> Does anybody know what's going on and how to remedy it?
>
> Thanks,
> Milan
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ADUCKMPDWVDWSF2RCUS3VLKTLHYZNA7T/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3U3UQNBV6A6JBI3L6MDFVNGOD2577KLI/


[ovirt-devel] Re: CI: jsonrpcserver test fails

2019-12-11 Thread Amit Bawer
On Wed, Dec 11, 2019 at 2:56 PM Amit Bawer  wrote:

> Hi devel,
>
> We have (also) frequent connectivity/timeout failures on CI tests for
> jsonrpcserver tests.
>
> Example:
>
>
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/15642//artifact/check-patch.tests-py3.el8.x86_64/mock_logs/script/stdout_stderr.log
>
> Proposing the following patches:
>
> https://gerrit.ovirt.org/#/c/105518/
> https://gerrit.ovirt.org/#/c/105519/
>
See also: https://gerrit.ovirt.org/#/c/105526/

>
> You are most welcome to review or suggest alternative ones.
>
> Thanks.
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/YNIIMS2YXUOUWMBVG4QWETBJZFK2YCK3/


[ovirt-devel] CI: jsonrpcserver test fails

2019-12-11 Thread Amit Bawer
Hi devel,

We have (also) frequent connectivity/timeout failures on CI tests for
jsonrpcserver tests.

Example:

https://jenkins.ovirt.org/job/vdsm_standard-check-patch/15642//artifact/check-patch.tests-py3.el8.x86_64/mock_logs/script/stdout_stderr.log

Proposing the following patches:

https://gerrit.ovirt.org/#/c/105518/
https://gerrit.ovirt.org/#/c/105519/

You are most welcome to review or suggest alternative ones.

Thanks.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PDJHLTQJ7ZHRGBRXCOZRBDGJL7EVJ3V5/


[ovirt-devel] Re: Proposing Vojtech Juranek as VDSM Storage maintainer

2019-11-26 Thread Amit Bawer
+1 (or +2 at least).

On Tue, Nov 26, 2019 at 4:57 PM Tal Nisan  wrote:

> Hi everyone,
> Vojtech joined the Storage team 15 months ago and was quickly thrown by us
> into the deep and stormy waters of VDSM
> This past year and a bit Vojtech managed to dive quickly into VDSM and
> played a key part in the 4K blocks feature, Python3 transformation, RHEL8
> host support and numerous tests addition, code review and refactors.
> Vojtech has managed to have more than 200 VDSM patches merged this year
> and if you're a VDSM developer you probably know this is nothing trivial :)
> Given the deep knowledge Vojtech acquired and the responsibility he showed
> in the reviews, tests and verification I'd like to nominate Vojtech as a
> VDSM Storage maintainer.
>
> Your thoughts please.
> Tal.
>
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5YPS5AQJKZQMDEJ5KRWKZET345U2XAY4/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HV3BTFDHS4SFW3ZRTCBXXZ7LSW7IZAIF/


[ovirt-devel] master OST seems to be broken

2019-11-24 Thread Amit Bawer
Has anyone managed to pass OST lately?
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/H4FITQ7CDW244XENZXV5C3SXRY7Y2HDZ/


[ovirt-devel] Re: Failed to run VDSM tests locally using tox

2019-11-24 Thread Amit Bawer
Usually, when there is an issue with modules not being found, it's better to go
back to the vdsm top dir and run:

make clean
./autogen.sh
make

then try again

Also make sure you are running against the correct Python version for the
tested env; e.g. trying to run tox -e storage-py36 on fc30 with py37 would
also fail with modules not found.
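A quick way to sanity-check that point before launching tox: compare the env suffix with the running interpreter. This is a sketch only — `env_matches` is a made-up helper, not part of vdsm or tox:

```python
import sys

def env_matches(envname, version_info=sys.version_info):
    """Return True if e.g. 'storage-py36' matches the given interpreter.

    A mismatch typically surfaces as ModuleNotFoundError deep inside the
    test run rather than as a clear error up front.
    """
    suffix = envname.rsplit("-py", 1)[-1]          # "storage-py36" -> "36"
    wanted = (int(suffix[0]), int(suffix[1:]))     # "36" -> (3, 6)
    return tuple(version_info[:2]) == wanted

print(env_matches("storage-py36", (3, 6)))   # → True
print(env_matches("storage-py36", (3, 7)))   # → False
```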

On Sun, Nov 24, 2019 at 11:10 AM Eyal Shenitzky  wrote:

> Hi,
>
> I am failing to run VDSM tests locally using tox, failed with the
> following error:
>
> ImportError while loading conftest
> '/home/eshenitz/git/vdsm/tests/storage/conftest.py'.
> storage/conftest.py:36: in 
> from vdsm import jobs
> ../lib/vdsm/jobs.py:29: in 
> from vdsm.config import config
> ../lib/vdsm/config.py:29: in 
> from vdsm.common.config import *  # NOQA: F401, F403
> E   ModuleNotFoundError: No module named 'vdsm.common.config'
>
> Did someone encounter this problem?
> I cannot run vdsm-tool configure --force because it exists only in
> python-2 version
>
> --
> Regards,
> Eyal Shenitzky
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/J5BDM6CSMZI6W263GPJALJMPZX6UYFJ6/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ICJKNWGEXFYYI4SU2GPUF7CSEKONIYCK/


[ovirt-devel] OST fails for collecting artifacts

2019-11-18 Thread Amit Bawer
This happens for several runs; the full log can be seen at
http://jenkins.ovirt.org/job/ovirt-system-tests_manual/6057/artifact/exported-artifacts/test_logs/basic-suite-master/post-002_bootstrap.py/lago_logs/lago.log


2019-11-18 12:28:12,710::log_utils.py::end_log_task::670::root::ERROR::
 - [Thread-42] lago-basic-suite-master-engine: ERROR (in
0:00:08)
2019-11-18 12:28:12,731::log_utils.py::__exit__::607::lago.prefix::DEBUG::
 File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526, in
_collect_artifacts
vm.collect_artifacts(path, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
748, in collect_artifacts
ignore_nopath=ignore_nopath
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
468, in extract_paths
return self.provider.extract_paths(paths, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py",
line 398, in extract_paths
ignore_nopath=ignore_nopath,
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
253, in extract_paths
self._extract_paths_tar_gz(paths, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
102, in wrapper
return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
341, in _extract_paths_tar_gz
raise ExtractPathNoPathError(remote_path)

2019-11-18 12:28:12,731::utils.py::_ret_via_queue::63::lago.utils::DEBUG::Error
while running thread Thread-42
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/lago/utils.py", line 58, in
_ret_via_queue
queue.put({'return': func()})
  File "/usr/lib/python2.7/site-packages/lago/prefix.py", line 1526,
in _collect_artifacts
vm.collect_artifacts(path, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
748, in collect_artifacts
ignore_nopath=ignore_nopath
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
468, in extract_paths
return self.provider.extract_paths(paths, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/providers/libvirt/vm.py",
line 398, in extract_paths
ignore_nopath=ignore_nopath,
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
253, in extract_paths
self._extract_paths_tar_gz(paths, ignore_nopath)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
102, in wrapper
return func(self, *args, **kwargs)
  File "/usr/lib/python2.7/site-packages/lago/plugins/vm.py", line
341, in _extract_paths_tar_gz
raise ExtractPathNoPathError(remote_path)
ExtractPathNoPathError: Failed to extract files: /tmp/otopi*
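For reference, the failure is lago raising ExtractPathNoPathError because /tmp/otopi* matched nothing on the VM. The tolerant behaviour that the ignore_nopath flag in the traceback is meant to give can be sketched like this — illustrative Python only, with names mirroring the traceback; this is not the actual lago code:

```python
import glob

class ExtractPathNoPathError(Exception):
    pass

def extract_paths(patterns, ignore_nopath=False):
    # Resolve each glob pattern; either skip missing paths quietly or
    # fail loudly, mirroring the ignore_nopath flag seen above.
    resolved = []
    for pattern in patterns:
        matches = glob.glob(pattern)
        if not matches and not ignore_nopath:
            raise ExtractPathNoPathError(pattern)
        resolved.extend(matches)
    return resolved

print(extract_paths(["/tmp/otopi-definitely-missing-*"], ignore_nopath=True))  # → []
```

With ignore_nopath=True the missing artifact would simply be skipped instead of aborting the whole collection step.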
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/PCS27XT3ODFL5ZRUPQMB5BFOW6BQXDGS/


[ovirt-devel] Re: Configure Local Storage On Host

2019-11-18 Thread Amit Bawer
On Mon, Nov 18, 2019 at 2:46 PM  wrote:

> I'm looking to do it with the SDK, not the UI.
>
Worth opening a ticket; I didn't see documentation for local storage in the SDK.

> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/H6UZYPTAFLNNKYCJ2LSWREIUE2Q4BZ7I/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/A5JLQR4F6A56XM56K3DRRMSREZXTFAXQ/


[ovirt-devel] Re: Configure Local Storage On Host

2019-11-18 Thread Amit Bawer
On Mon, Nov 18, 2019 at 1:08 PM  wrote:

> Hi,
>
> I can add a local domain through the domains service portion of the SDK,
> however, I can't find an endpoint for Configure Local Storage in the
> management menu of the Host in the UI.
>
Have you tried to follow step 10 onwards as described in [1]?
[1] http://blog.domb.net/?p=2141

>
> Is this missing functionality? If so is there a ticket open for it?
>
> TIA
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/WUIKFF4CIWWHW7SAECGQ7YXZ625F4ELD/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/CL2FMVOIZYT2YJF4K54S4TSNPPEY7RRM/


[ovirt-devel] Re: VDSM TestCommunicate.test_send_receive() failed

2019-11-11 Thread Amit Bawer
On Mon, Nov 11, 2019 at 3:25 PM Nir Soffer  wrote:

> On Mon, Nov 11, 2019 at 2:18 PM Amit Bawer  wrote:
> >
> > py3-mailbox changes are not in branch yet, so not related.
> >
> > From the failing test log i could see signs of relatively slow storage
> responses (couple 4KB r/w in 1.65, 1.77 seconds, while others take less
> than a second), so the test timeout could result from slow response times.
> >
> > 2019-11-11 09:23:17,699 DEBUG (mailbox-spm) [storage.Misc.excCmd]
> SUCCESS:  = '1+0 records in\n1+0 records out\n40960 bytes (41 kB, 40
> KiB) copied, 1.65805 s, 24.7 kB/s\n';  = 0 (commands:202)
> > 2019-11-11 09:23:17,702 DEBUG (mailbox-spm)
> [storage.MailBox.SpmMailMonitor] SPM_MailMonitor: Mailbox 7 validated,
> checking mail (mailbox:644)
> > 2019-11-11 09:23:17,703 DEBUG (mailbox-spm)
> [storage.MailBox.SpmMailMonitor] SPM_MailMonitor: processing request:
> '1xtnd\xe1_\xfeeT\x8a\x18\xb3\xe0JT\xe5^\xc8\xdb\x8a_Z%\xd8\xfcs.\xa4\xc3C\xbb>\xc6\xf1r\xd70064000'
> (mailbox:681)
> > 2019-11-11 09:23:17,715 DEBUG (mailbox-spm/0) [storage.ThreadPool]
> Number of running tasks: 1 (threadPool:60)
> > 2019-11-11 09:23:17,715 INFO  (mailbox-spm/0)
> [storage.ThreadPool.WorkerThread] START task
> 575a47fe-8b8b-465e-8bd7-80b31f09af69 (cmd= 0x7fbf67324b18>, args=(, 449,
> '1xtnd\xe1_\xfeeT\x8a\x18\xb3\xe0JT\xe5^\xc8\xdb\x8a_Z%\xd8\xfcs.\xa4\xc3C\xbb>\xc6\xf1r\xd70064000'))
> (threadPool:208)
> > 2019-11-11 09:23:17,717 DEBUG (mailbox-spm/0) [storage.TaskManager.Task]
> (Task='25518496-f685-40b9-b04b-8b50d1f26095') moving from state init ->
> state preparing (task:610)
> > 2019-11-11 09:23:17,718 DEBUG (mailbox-spm/0) [storage.TaskManager.Task]
> (Task='25518496-f685-40b9-b04b-8b50d1f26095') finished: None (task:1214)
> > 2019-11-11 09:23:17,721 DEBUG (mailbox-spm/0) [storage.TaskManager.Task]
> (Task='25518496-f685-40b9-b04b-8b50d1f26095') moving from state preparing
> -> state finished (task:610)
> > 2019-11-11 09:23:17,721 DEBUG (mailbox-spm/0)
> [storage.ResourceManager.Owner] Owner.releaseAll requests {} resources {}
> (resourceManager:913)
> > 2019-11-11 09:23:17,722 DEBUG (mailbox-spm/0)
> [storage.ResourceManager.Owner] Owner.cancelAll requests {}
> (resourceManager:950)
> > 2019-11-11 09:23:17,722 DEBUG (mailbox-spm/0) [storage.TaskManager.Task]
> (Task='25518496-f685-40b9-b04b-8b50d1f26095') ref 0 aborting False
> (task:1012)
> > 2019-11-11 09:23:17,722 INFO  (mailbox-spm/0)
> [storage.ThreadPool.WorkerThread] FINISH task
> 575a47fe-8b8b-465e-8bd7-80b31f09af69 (threadPool:210)
> > 2019-11-11 09:23:17,723 DEBUG (mailbox-spm/0) [storage.ThreadPool]
> Number of running tasks: 0 (threadPool:60)
> > 2019-11-11 09:23:17,907 DEBUG (mailbox-spm/4) [root] FINISH thread
>  (concurrent:196)
> > 2019-11-11 09:23:17,918 DEBUG (mailbox-spm/3) [root] FINISH thread
>  (concurrent:196)
> > 2019-11-11 09:23:17,923 DEBUG (mailbox-hsm) [storage.Misc.excCmd]
> SUCCESS:  = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB, 4.0
> KiB) copied, 1.77794 s, 2.3 kB/s\n';  = 0 (commands:202)
>
> It looks like CI storage is very slow, and we need larger timeouts in
> these tests.
> I think we use 6 seconds now, lets bump it to 10 seconds?
>

see patch: https://gerrit.ovirt.org/#/c/104556/
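For context, the change boils down to a larger multiplier on an event wait. The pattern in the test looks roughly like this — the MONITOR_INTERVAL value and worker below are made up for illustration; the real test defines its own constants:

```python
import threading
import time

MONITOR_INTERVAL = 0.1  # illustrative; the real test uses its own constant

msg_processed = threading.Event()

def slow_spm_callback():
    time.sleep(0.3)        # simulate slow CI storage I/O
    msg_processed.set()

threading.Thread(target=slow_spm_callback).start()

# Waiting 10 * MONITOR_INTERVAL instead of 6 * gives slow storage headroom;
# Event.wait() returns False only if the timeout expires first.
expired = not msg_processed.wait(10 * MONITOR_INTERVAL)
assert not expired, "message was not processed on time"
```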

>
> >
> >
> > On Mon, Nov 11, 2019 at 12:41 PM Eyal Shenitzky 
> wrote:
> >>
> >> Hi,
> >>
> >> I see this failure in the VDSM check-patch [1] -
> >>
> >> 11:25:34 === FAILURES
> ===
> >> 11:25:34 __ TestCommunicate.test_send_receive
> ___
> >> 11:25:34
> >> 11:25:34 self =  0x7fbf6498e680>
> >> 11:25:34 mboxfiles =
> MboxFiles(inbox='/var/tmp/vdsm/test_send_receive0/inbox',
> outbox='/var/tmp/vdsm/test_send_receive0/outbox')
> >> 11:25:34
> >> 11:25:34 def test_send_receive(self, mboxfiles):
> >> 11:25:34 msg_processed = threading.Event()
> >> 11:25:34 expired = False
> >> 11:25:34 received_messages = []
> >> 11:25:34
> >> 11:25:34 def spm_callback(msg_id, data):
> >> 11:25:34 received_messages.append((msg_id, data))
> >> 11:25:34 msg_processed.set()
> >> 11:25:34
> >> 11:25:34 with make_hsm_mailbox(mboxfiles, 7) as hsm_mb:
> >> 11:25:34 with make_spm_mailbox(mboxfiles) as spm_mm:
> >> 11:25:34 spm_mm.registerMessageType(b"xtnd", spm_callback)
> >> 11:25:34 VOL_DATA = dict(
>

[ovirt-devel] Re: VDSM TestCommunicate.test_send_receive() failed

2019-11-11 Thread Amit Bawer
py3-mailbox changes are not in branch yet, so not related.

From the failing test log I could see signs of relatively slow storage
responses (a couple of 4 KB reads/writes took 1.65 and 1.77 seconds, while
others take less than a second), so the test timeout could result from slow
response times.

2019-11-11 09:23:17,699 DEBUG (mailbox-spm) [storage.Misc.excCmd]
SUCCESS:  = '1+0 records in\n1+0 records out\n40960 bytes (41 kB,
40 KiB) copied, 1.65805 s, 24.7 kB/s\n';  = 0 (commands:202)
2019-11-11 09:23:17,702 DEBUG (mailbox-spm)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor: Mailbox 7 validated,
checking mail (mailbox:644)
2019-11-11 09:23:17,703 DEBUG (mailbox-spm)
[storage.MailBox.SpmMailMonitor] SPM_MailMonitor: processing request:
'1xtnd\xe1_\xfeeT\x8a\x18\xb3\xe0JT\xe5^\xc8\xdb\x8a_Z%\xd8\xfcs.\xa4\xc3C\xbb>\xc6\xf1r\xd70064000'
(mailbox:681)
2019-11-11 09:23:17,715 DEBUG (mailbox-spm/0) [storage.ThreadPool]
Number of running tasks: 1 (threadPool:60)
2019-11-11 09:23:17,715 INFO  (mailbox-spm/0)
[storage.ThreadPool.WorkerThread] START task
575a47fe-8b8b-465e-8bd7-80b31f09af69 (cmd=, args=(, 449,
'1xtnd\xe1_\xfeeT\x8a\x18\xb3\xe0JT\xe5^\xc8\xdb\x8a_Z%\xd8\xfcs.\xa4\xc3C\xbb>\xc6\xf1r\xd70064000'))
(threadPool:208)
2019-11-11 09:23:17,717 DEBUG (mailbox-spm/0)
[storage.TaskManager.Task]
(Task='25518496-f685-40b9-b04b-8b50d1f26095') moving from state init
-> state preparing (task:610)
2019-11-11 09:23:17,718 DEBUG (mailbox-spm/0)
[storage.TaskManager.Task]
(Task='25518496-f685-40b9-b04b-8b50d1f26095') finished: None
(task:1214)
2019-11-11 09:23:17,721 DEBUG (mailbox-spm/0)
[storage.TaskManager.Task]
(Task='25518496-f685-40b9-b04b-8b50d1f26095') moving from state
preparing -> state finished (task:610)
2019-11-11 09:23:17,721 DEBUG (mailbox-spm/0)
[storage.ResourceManager.Owner] Owner.releaseAll requests {} resources
{} (resourceManager:913)
2019-11-11 09:23:17,722 DEBUG (mailbox-spm/0)
[storage.ResourceManager.Owner] Owner.cancelAll requests {}
(resourceManager:950)
2019-11-11 09:23:17,722 DEBUG (mailbox-spm/0)
[storage.TaskManager.Task]
(Task='25518496-f685-40b9-b04b-8b50d1f26095') ref 0 aborting False
(task:1012)
2019-11-11 09:23:17,722 INFO  (mailbox-spm/0)
[storage.ThreadPool.WorkerThread] FINISH task
575a47fe-8b8b-465e-8bd7-80b31f09af69 (threadPool:210)
2019-11-11 09:23:17,723 DEBUG (mailbox-spm/0) [storage.ThreadPool]
Number of running tasks: 0 (threadPool:60)
2019-11-11 09:23:17,907 DEBUG (mailbox-spm/4) [root] FINISH thread

(concurrent:196)
2019-11-11 09:23:17,918 DEBUG (mailbox-spm/3) [root] FINISH thread

(concurrent:196)
2019-11-11 09:23:17,923 DEBUG (mailbox-hsm) [storage.Misc.excCmd]
SUCCESS:  = '1+0 records in\n1+0 records out\n4096 bytes (4.1 kB,
4.0 KiB) copied, 1.77794 s, 2.3 kB/s\n';  = 0 (commands:202)


On Mon, Nov 11, 2019 at 12:41 PM Eyal Shenitzky  wrote:

> Hi,
>
> I see this failure in the VDSM check-patch [1] -
>
> 11:25:34 === FAILURES
> ===
> 11:25:34 __ TestCommunicate.test_send_receive
> ___
> 11:25:34
> 11:25:34 self =  0x7fbf6498e680>
> 11:25:34 mboxfiles =
> MboxFiles(inbox='/var/tmp/vdsm/test_send_receive0/inbox',
> outbox='/var/tmp/vdsm/test_send_receive0/outbox')
> 11:25:34
> 11:25:34 def test_send_receive(self, mboxfiles):
> 11:25:34 msg_processed = threading.Event()
> 11:25:34 expired = False
> 11:25:34 received_messages = []
> 11:25:34
> 11:25:34 def spm_callback(msg_id, data):
> 11:25:34 received_messages.append((msg_id, data))
> 11:25:34 msg_processed.set()
> 11:25:34
> 11:25:34 with make_hsm_mailbox(mboxfiles, 7) as hsm_mb:
> 11:25:34 with make_spm_mailbox(mboxfiles) as spm_mm:
> 11:25:34 spm_mm.registerMessageType(b"xtnd", spm_callback)
> 11:25:34 VOL_DATA = dict(
> 11:25:34 poolID=SPUUID,
> 11:25:34 domainID='8adbc85e-e554-4ae0-b318-8a5465fe5fe1',
> 11:25:34 volumeID='d772f1c6-3ebb-43c3-a42e-73fcd8255a5f')
> 11:25:34 REQUESTED_SIZE = 100
> 11:25:34
> 11:25:34 hsm_mb.sendExtendMsg(VOL_DATA, REQUESTED_SIZE)
> 11:25:34
> 11:25:34 if not msg_processed.wait(10 * MONITOR_INTERVAL):
> 11:25:34 expired = True
> 11:25:34
> 11:25:34 > assert not expired, 'message was not processed on time'
> 11:25:34 E AssertionError: message was not processed on time
> 11:25:34 E assert not True
> 11:25:34
> 11:25:34 storage/mailbox_test.py:180: AssertionError
>
> Is this issue known to anyone?
>
> [1] -
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/14093/consoleFull
>
> Thanks
>
>
> --
> Regards,
> Eyal Shenitzky
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7L2UDMMPGV2CRSYJSXOOLQYIFMZX2NLN/
>

[ovirt-devel] Re: Proposing Marcin Sobczyk as VDSM infra maintainer

2019-11-07 Thread Amit Bawer
+1

On Thursday, November 7, 2019, Francesco Romani  wrote:

> On 11/7/19 3:13 PM, Martin Perina wrote:
>
> Hi,
>
> Marcin has joined infra team more than a year ago and during this time he
> contributed a lot to VDSM packaging, improved automation and ported all
> infra team parts of VDSM (jsonrpc, ssl, vdms-client, hooks infra, ...) to
> Python 3. He is a very nice person to talk, is usually very responsive and
> takes care a lot about code quality.
>
> So I'd like to propose Marcin as VDSM infra maintainer.
>
> Please share your thoughts.
>
>
> +1
>
> --
> Francesco Romani
> Senior SW Eng., Virtualization R&D
> Red Hat
> IRC: fromani github: @fromanirh
>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FKTSHNJ7YLZWVTZJQ62HQPV3XGRWG2HU/


[ovirt-devel] Re: CI is not triggered for pushed gerrit updates

2019-09-25 Thread Amit Bawer
Sorry, I missed an earlier mail reporting that this is already resolved.

On Wed, Sep 25, 2019 at 12:42 PM Amit Bawer  wrote:

> CI has stopped from being triggered for pushed gerrit updates.
>
> Example at: https://gerrit.ovirt.org/#/c/103320/
> last PS did not trigger CI tests.
>
> Please advise.
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ZHCSQ5YL5CAJXS2SFFRPQELQE65LVWLH/


[ovirt-devel] CI is not triggered for pushed gerrit updates

2019-09-25 Thread Amit Bawer
CI is no longer being triggered for pushed gerrit updates.

Example at: https://gerrit.ovirt.org/#/c/103320/
last PS did not trigger CI tests.

Please advise.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/32643PIZDJ77WJVAQPS6LC37W6Q2PAQ3/


[ovirt-devel] Re: unicode sandwich in otopi/engine-setup

2019-09-04 Thread Amit Bawer
On Wed, Sep 4, 2019 at 10:25 AM Yedidyah Bar David  wrote:

> On Thu, Aug 29, 2019 at 10:00 PM Amit Bawer  wrote:
> >
> >
> >
> > On Thu, Aug 29, 2019 at 11:41 AM Yedidyah Bar David 
> wrote:
> >>
> >> Hi all,
> >>
> >> This is in a sense a continuation of the thread "Why filetransaction
> >> needs to encode the content to utf-8?", but I decided that a new
> >> thread is better.
> >>
> >> I started to systematically convert the code to use a unicode
> >> sandwich. I admit it was harder than I expected, and made me think
> >> somewhat differently about the move to python3, and about how
> >> reasonable (or not) it is to develop in the common subset of python2
> >> and python3 vs ditching python2 and moving fully to python3. It seems
> >> like at least parts of our (integration team) code will still have to
> >> run in python2 also in oVirt 4.4, so I guess we'll not have much
> >> choice :-)
> >>
> >> Current patches are only for otopi and engine-setup, and are by no
> >> means thorough - I didn't check each and every open() call and similar
> >> ones. But it's enough for getting engine-setup finish successfully on
> >> both python2 and python3 (EL7 and Fedora 29), with some utf-8 inserted
> >> in relevant places of the input (for the plugins already handled).
> >>
> >> I didn't bother trying non-utf-8 encodings. Perhaps I should, but it's
> >> not completely clear to me what's the best approach [2].
> >
> >
> > A universal solution when dealing with sys.argv which could contain file
> paths/names in various languages,
> > would be selecting sys.getfilesystemencoding() for the encoding scheme
> instead of a hard coded 'utf-8' [3].
> > We've done something similar in sanlock python-c API for converting
> file-system paths into bytes, although it's in C,
> > the principle of using the file-system default encoding applies there as
> well [4].
>
> Thanks for the hint. Looked at this and thought a bit, and I tend to
> ignore/postpone until a need arises. We already have "utf-8" hard-coded
> in otopi 27 times, not sure it makes sense now to go after each and every
> one of them and analyze the more-general function (or expression, or even
> more complex) to replace it with. I guess this is only relevant for
> Windows,
> and I do not think anyone is going to try to port otopi to Windows soon.
>
> Searching for relevant keywords in google finds mostly results from around
> 2009-2012, which I guess was the time around which most systems converted
> their non-utf-8 file collections to utf-8. A somewhat newer example (2016):
>
> http://beets.io/blog/paths.html
>
> So I am going to ignore this. If you think that's a bad choice, please
> open a bug, and I'll handle it later. Thanks!
>

Default Linux locale encoding is UTF-8, so I don't think it's a bad choice.
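On such systems the hard-coded "utf-8" and sys.getfilesystemencoding() coincide, which is easy to check with a small sketch (the example path is arbitrary; nothing is written to disk):

```python
import sys

enc = sys.getfilesystemencoding()   # "utf-8" on a modern Linux locale
path = "/tmp/דוגמה"                 # a path containing non-ASCII characters

raw = path.encode(enc)              # the bytes form handed to the OS
assert raw.decode(enc) == path      # lossless round trip
print(enc)
```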


>
> For now, my top priority is to get otopi+engine-setup+host-deploy work
> well enough for:
>
> 1. Developers that use fedora for everything, or mix fedora and RHEL7/8
> (e.g. engine on one, host on another).
>
> 2. RHV 4.4, with hosts being RHEL8.
>
> Best regards,
>
> >
> > [3] https://stackoverflow.com/a/5113874
> > [4] https://pagure.io/sanlock/blob/master/f/python/sanlock.c#_76
> >
> >>
> >>
> >> Currently, you must have both otopi and engine updated to get things
> >> working. If there is demand, I might spend some time
> >> splitting/rebasing/etc to make it possible to update just one of them
> >> and only later the other, but not sure it's worth it.
> >>
> >> I don't mind splitting/squashing if it makes reviews simpler, but I
> >> think the patches are ok as-is. These are the bottom patches of each
> >> stack:
> >>
> >> otopi: https://gerrit.ovirt.org/102085
> >>
> >> engine-setup: https://gerrit.ovirt.org/102934
> >>
> >> [1] http://python-future.org/unicode_literals.html
> >>
> >> [2]
> https://stackoverflow.com/questions/4012571/python-which-encoding-is-used-for-processing-sys-argv
> >>
> >> Thanks and best regards,
> >> --
> >> Didi
>
>
>
> --
> Didi
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/2N6KHLGI4AYZ7T2IXEENYB3FSMELU6AJ/


[ovirt-devel] Re: unicode_literals vs "u''" vs six.text_type

2019-09-02 Thread Amit Bawer
On Sun, Sep 1, 2019 at 8:23 PM Yedidyah Bar David  wrote:

> On Sun, Sep 1, 2019 at 3:37 PM Amit Bawer  wrote:
> >
> >
> >
> > On Sun, Sep 1, 2019 at 2:34 PM Yedidyah Bar David 
> wrote:
> >>
> >> On Sun, Sep 1, 2019 at 1:20 PM Amit Bawer  wrote:
> >> >
> >> >
> >> >
> >> > On Sun, Sep 1, 2019 at 10:28 AM Yedidyah Bar David 
> wrote:
> >> >>
> >> >> Hi all,
> >> >>
> >> >> That's a "sub-thread" of "unicode sandwich in otopi/engine-setup".
> >> >>
> >> >> I was recommended to use 'six.text_type() over "u''". I did read [1],
> >> >> and eventually decided that my own preference is to just add "u"
> >> >> prefix. Reasoning is inside [1].
> >> >>
> >> >> Do people have different preferences/reasoning they want to share?
> >> >>
> >> >> Do people think we should have project-wide policy re this?
> >> >
> >> >
> >> > Since our code is currently transitioning from py2 to py2/py3, and
> not from py3 to py3/py2, it would be fair to assume that most
> >> > already existing string literals in it contain ascii symbols, unless
> explicitly stated otherwise;
> >> > so IMO it would only make sense to enforce 'u' over newly added
> literals which involve non-ascii symbols as long as py2 is still alive.
> >>
> >> Not exactly.
> >>
> >> Suppose (mostly correctly) that the code didn't employ the "unicode
> >> sandwich" technique so far. Meaning, much was handled as python2 str
> >> objects containing utf-8-encoded strings, and converted to unicode
> >> objects mainly as needed/noted/considered. Suppose that x is a
> >> variable that used to contain such an str, usually ascii-only, but
> >> sometimes perhaps utf-8. Now, this:
> >>
> >> 'x: {}'.format(x)
> >>
> >> would work, and replace {} with the contents of x, and return a
> >> python2 str, utf-8-encoded if x is utf-8. But if now x contains a
> >> unicode object (because we decided to follow the sandwich approach,
> >> and encode all utf-8 during input), it would fail, if x is not
> >> ascii-only. Adding u to 'x: {}' solves this.
> >
> >
> > utf-8 is an ascii extension, meaning that first 128 ordinals agree for
> both encodings, so unicode sandwich has no negative effect on your example.
> > It would be only a problem only if input for x originally had a
> non-ascii character in it, but that should have been an issue for py2 in
> the first place, regardless to py3 sandwiches.
>
> Let me clarify:
>

Thanks, now I see where I was wrong.


> In python2:
>
> If I start with:
>
> x='א'
>

py2: x is 2 bytes: '\xd7\x90'
py3: x is unicode str with a single symbol '\u05d0'


> '{}'.format(x)
>
> Works.


py2: str.format joins the byte strings without any decoding, so it's fine.
py3: default unicode string, so its fine.


>
> If I then employ the sandwich, and therefore effectively change the code
> to be:
>
> x=u'א'
>

now py2 and py3 agree on contents of x, so sandwiching seems like the right
choice to make sure they treat x the same way.


> '{}'.format(x)
>
> Fails.
>
> To fix, I can change it to:
>
> u'{}'.format(x)
>

seems like a legit option to bridge the default encoding gap between py2
and py3
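In Python 3 the sandwich is the default model anyway, so the whole discussion reduces to this pattern — a minimal sketch, assuming UTF-8 at both boundaries:

```python
# Unicode sandwich: decode bytes at the input boundary, work with text
# everywhere inside, encode back to bytes only at the output boundary.
raw = "x: א".encode("utf-8")        # bytes arriving from outside (file, socket)
text = raw.decode("utf-8")          # decode once, at the edge
msg = "value is {}".format(text)    # internal code formats text with text
out = msg.encode("utf-8")           # encode once, on the way out
print(msg)
```

The py2 failure mode discussed above is exactly what happens when the middle of the sandwich mixes byte strings with unicode objects.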


> Or, to import unicode_literals and keep the existing code line(s).
>
> Both work.
>
> In actual code, the assignment to x will/might be in a different
> module, and/or not contain a literal but user input, but '{}' _will_
> be a literal.
>
> Do people have preferences? Can people share their reasoning for their
> preferences? Do you think we should have policies, or it's up to each
> git repo, or even each patch author+maintainers/reviewers to decide?
>
> As discussed in the original [1], both have pros and cons. Personally
> I prefer "u''". But not strongly, because we try to keep our modules
> rather small, so it's not like you add a single import line that
> changes the semantics of hundreds or thousands of lines. Usually, it's
> rather easy to decide that such an import is ok. Ideally, we'd have
> full code coverage in our tests, including utf-8 everywhere, but I
> think we are quite far from that, for now.
>
> Thanks and best regards,
>
> >
> >>
> >> So I have to handle also all existing such literals, at least those
> >> that would now require handling unicode vars.

[ovirt-devel] Re: unicode_literrals vs "u''" vs six.text_type

2019-09-01 Thread Amit Bawer
On Sun, Sep 1, 2019 at 2:34 PM Yedidyah Bar David  wrote:

> On Sun, Sep 1, 2019 at 1:20 PM Amit Bawer  wrote:
> >
> >
> >
> > On Sun, Sep 1, 2019 at 10:28 AM Yedidyah Bar David 
> wrote:
> >>
> >> Hi all,
> >>
> >> That's a "sub-thread" of "unicode sandwich in otopi/engine-setup".
> >>
> >> I was recommended to use 'six.text_type() over "u''". I did read [1],
> >> and eventually decided that my own preference is to just add "u"
> >> prefix. Reasoning is inside [1].
> >>
> >> Do people have different preferences/reasoning they want to share?
> >>
> >> Do people think we should have project-wide policy re this?
> >
> >
> > Since our code is currently transitioning from py2 to py2/py3, and not
> from py3 to py3/py2, it would be fair to assume that most
> > already existing string literals in it contain ascii symbols, unless
> explicitly stated otherwise;
> > so IMO it would only make sense to enforce 'u' over newly added literals
> which involve non-ascii symbols as long as py2 is still alive.
>
> Not exactly.
>
> Suppose (mostly correctly) that the code didn't employ the "unicode
> sandwich" technique so far. Meaning, much was handled as python2 str
> objects containing utf-8-encoded strings, and converted to unicode
> objects mainly as needed/noted/considered. Suppose that x is a
> variable that used to contain such an str, usually ascii-only, but
> sometimes perhaps utf-8. Now, this:
>
> 'x: {}'.format(x)
>
> would work, and replace {} with the contents of x, and return a
> python2 str, utf-8-encoded if x is utf-8. But if now x contains a
> unicode object (because we decided to follow the sandwich approach,
> and encode all utf-8 during input), it would fail, if x is not
> ascii-only. Adding u to 'x: {}' solves this.
>

utf-8 is an ascii extension, meaning that the first 128 ordinals agree for
both encodings, so the unicode sandwich has no negative effect on your
example. It would only be a problem if the input for x originally had a
non-ascii character in it, but that would have been an issue for py2 in the
first place, regardless of py3 sandwiches.


> So I have to handle also all existing such literals, at least those
> that would now require handling unicode vars.
>
> >
> >>
> >>
> >> Personally, I do not see the big advantage of adding "six.text_type()"
> >> (15 chars) instead of a single "u". I do see where it can be useful,
> >> but not as a very long replacement, IMO, for "u", or for
> >> unicode_literals.
> >
> >
> Once py2 is officially terminated, neither option mentioned above will
> likely be meaningful, as unicode is py3's default string type; however,
> IMO for literals an explicit 'u' is a more native approach, and provides
> clarity about the programmer's intentions compared to a global switch in
> the form of "import unicode_literals". Using six.text_type() is probably
> a good solution nowadays for variables rather than literals, and will
> probably have to die off some day after py2 does.
> >
> >>
> >>
> >> Thanks and best regards,
> >>
> >> [1] http://python-future.org/unicode_literals.html
> >> --
> >> Didi
> >> ___
> >> Devel mailing list -- devel@ovirt.org
> >> To unsubscribe send an email to devel-le...@ovirt.org
> >> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> >> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> >> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SW3P4VOGBP43N54CQEH3YURN6X5ZMWIX/
>
>
>
> --
> Didi
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/T5S3SCV23QNL67WKHVVLPXXL4AYNTW3M/


[ovirt-devel] Re: unicode_literrals vs "u''" vs six.text_type

2019-09-01 Thread Amit Bawer
On Sun, Sep 1, 2019 at 10:28 AM Yedidyah Bar David  wrote:

> Hi all,
>
> That's a "sub-thread" of "unicode sandwich in otopi/engine-setup".
>
> I was recommended to use 'six.text_type() over "u''". I did read [1],
> and eventually decided that my own preference is to just add "u"
> prefix. Reasoning is inside [1].
>
> Do people have different preferences/reasoning they want to share?
>
> Do people think we should have project-wide policy re this?
>

Since our code is currently transitioning from py2 to py2/py3, and not from
py3 to py3/py2, it would be fair to assume that most
already existing string literals in it contain ascii symbols, unless
explicitly stated otherwise;
so IMO it would only make sense to enforce 'u' over newly added literals
which involve non-ascii symbols as long as py2 is still alive.


>
> Personally, I do not see the big advantage of adding "six.text_type()"
> (15 chars) instead of a single "u". I do see where it can be useful,
> but not as a very long replacement, IMO, for "u", or for
> unicode_literals.
>

Once py2 is officially terminated, neither option mentioned above will
likely be meaningful, as unicode is py3's default string type; however,
IMO for literals an explicit 'u' is a more native approach, and provides
clarity about the programmer's intentions compared to a global switch in
the form of "import unicode_literals". Using six.text_type() is probably a
good solution nowadays for variables rather than literals, and will
probably have to die off some day after py2 does.
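
For illustration, what six.text_type boils down to can be sketched without
six itself (a minimal stand-in, not the library's actual code):

```python
import sys

# six.text_type is unicode on py2 and str on py3; the conditional below
# mimics that (the py2 branch is never evaluated under py3, so the
# undefined-on-py3 name 'unicode' is harmless here).
text_type = str if sys.version_info[0] >= 3 else unicode  # noqa: F821

label = u'disk'                 # u'' literal: already text on both majors
assert isinstance(label, text_type)

coerced = text_type(42)         # the six.text_type(x)-style coercion
assert coerced == u'42'
```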


>
> Thanks and best regards,
>
> [1] http://python-future.org/unicode_literals.html
> --
> Didi
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/SW3P4VOGBP43N54CQEH3YURN6X5ZMWIX/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/XOFCWPU4SJR2CCIJE72RGZMBZE6FI7XJ/


[ovirt-devel] Re: unicode sandwich in otopi/engine-setup

2019-08-29 Thread Amit Bawer
On Thu, Aug 29, 2019 at 11:41 AM Yedidyah Bar David  wrote:

> Hi all,
>
> This is in a sense a continuation of the thread "Why filetransaction
> needs to encode the content to utf-8?", but I decided that a new
> thread is better.
>
> I started to systematically convert the code to use a unicode
> sandwich. I admit it was harder than I expected, and made me think
> somewhat differently about the move to python3, and about how
> reasonable (or not) it is to develop in the common subset of python2
> and python3 vs ditching python2 and moving fully to python3. It seems
> like at least parts of our (integration team) code will still have to
> run in python2 also in oVirt 4.4, so I guess we'll not have much
> choice :-)
>
> Current patches are only for otopi and engine-setup, and are by no
> means thorough - I didn't check each and every open() call and similar
> ones. But it's enough for getting engine-setup finish successfully on
> both python2 and python3 (EL7 and Fedora 29), with some utf-8 inserted
> in relevant places of the input (for the plugins already handled).
>
> I didn't bother trying non-utf-8 encodings. Perhaps I should, but it's
> not completely clear to me what's the best approach [2].
>

A universal solution when dealing with sys.argv, which could contain file
paths/names in various languages, would be to use
sys.getfilesystemencoding() for the encoding scheme instead of a
hard-coded 'utf-8' [3].
We've done something similar in the sanlock python-c API for converting
file-system paths into bytes; although it's in C, the principle of using
the file system's default encoding applies there as well [4].

[3] https://stackoverflow.com/a/5113874
[4] https://pagure.io/sanlock/blob/master/f/python/sanlock.c#_76
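
A hedged py3 sketch of that idea, using the stdlib os.fsencode()/
os.fsdecode() helpers, which wrap sys.getfilesystemencoding() (the file
name below is illustrative only):

```python
import os
import sys

# The interpreter's filesystem encoding (commonly 'utf-8' on modern
# systems, but not guaranteed):
fs_enc = sys.getfilesystemencoding()

# os.fsencode()/os.fsdecode() apply that encoding with the surrogateescape
# error handler, so arbitrary OS byte paths round-trip safely:
raw = os.fsencode('report.txt')     # text -> bytes for the OS boundary
name = os.fsdecode(raw)             # bytes -> text for program logic
assert name == 'report.txt'
```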


>
> Currently, you must have both otopi and engine updated to get things
> working. If there is demand, I might spend some time
> splitting/rebasing/etc to make it possible to update just one of them
> and only later the other, but not sure it's worth it.
>
> I don't mind splitting/squashing if it makes reviews simpler, but I
> think the patches are ok as-is. These are the bottom patches of each
> stack:
>
> otopi: https://gerrit.ovirt.org/102085
>
> engine-setup: https://gerrit.ovirt.org/102934
>
> [1] http://python-future.org/unicode_literals.html
>
> [2]
> https://stackoverflow.com/questions/4012571/python-which-encoding-is-used-for-processing-sys-argv
>
> Thanks and best regards,
> --
> Didi
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5OLW5LB2IX7VS6IIROCIM4DEHAFYSCTT/


[ovirt-devel] Re: vdsm_standard-check-patch is stuck on Archiving artifacts

2019-08-19 Thread Amit Bawer
Ok, thanks for the update.

On Mon, Aug 19, 2019 at 1:01 PM Emil Natan  wrote:

> As getting the s390x machine back could take some time, we have disabled
> the checks on that type of hardware.
>
> On Mon, Aug 19, 2019 at 12:45 PM Emil Natan  wrote:
>
>> I believe it'll take some time to get that fixed. We are currently
>> waiting for third party to help with that as we do not have access to the
>> machine to fix the issue. I'll update again as I know more.
>>
>> On Mon, Aug 19, 2019 at 11:41 AM Emil Natan  wrote:
>>
>>> Hi, I'm working on this now, I'll update when fixed.
>>>
>>> On Mon, Aug 19, 2019 at 10:59 AM Amit Bawer  wrote:
>>>
>>>> A quick/temp solution is also welcome at this point, as I need to
>>>> verify patches. Thanks
>>>>
>>>> On Mon, Aug 19, 2019 at 8:56 AM Amit Bawer  wrote:
>>>>
>>>>> Seems to happen again:
>>>>> https://jenkins.ovirt.org/job/standard-manual-runner/646/console
>>>>>
>>>>> On Sun, Aug 18, 2019 at 11:31 AM Amit Bawer  wrote:
>>>>>
>>>>>> Great, thanks for the update.
>>>>>>
>>>>>> On Sun, Aug 18, 2019 at 10:47 AM Ehud Yonasi 
>>>>>> wrote:
>>>>>>
>>>>>>> Hey,
>>>>>>> The problem was with mock cache filling up the filesystem and it is
>>>>>>> now cleaned.  The s390x slave is back online.
>>>>>>>
>>>>>>> On Sun, Aug 18, 2019 at 10:05 AM Daniel Belenky 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> Hi Nir,
>>>>>>>>
>>>>>>>> It seems that the reason behind this issue is that the s390x node
>>>>>>>> is offline.
>>>>>>>> I'm checking it right now and will update asap.
>>>>>>>>
>>>>>>>> On Sat, Aug 17, 2019 at 3:18 AM Nir Soffer 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> On Sat, Aug 17, 2019 at 3:06 AM Nir Soffer 
>>>>>>>>> wrote:
>>>>>>>>>
>>>>>>>>>> This looks like a bug, so adding infra-support - this will open a
>>>>>>>>>> ticket and someone will look into it.
>>>>>>>>>>
>>>>>>>>>> On Sat, Aug 17, 2019 at 1:12 AM Amit Bawer 
>>>>>>>>>> wrote:
>>>>>>>>>>
>>>>>>>>>>> Hi
>>>>>>>>>>> Unable to run CI builds and OSTs due to indefinite processing
>>>>>>>>>>> over fc29 Archiving artifacts phase
>>>>>>>>>>> Example run:
>>>>>>>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/console
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>> We have 2 stuck builds:
>>>>>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10454/
>>>>>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/
>>>>>>>>>
>>>>>>>>> Both started 4 hours ago.
>>>>>>>>>
>>>>>>>>> It is possible to abort jobs from the jenkins UI, but usually it
>>>>>>>>> causes more trouble because of partial cleanup, so let's let the
>>>>>>>>> infra team handle this properly.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>> ___
>>>>>>>>>>> Devel mailing list -- devel@ovirt.org
>>>>>>>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>>>>>> oVirt Code of Conduct:
>>>>>>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>>>>>> List Archives:
>>>>>>>>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JU3IZE2IQLVTVLRYCOOQZSQUC3QNLUD4/
>>>>>>>>>>>
>>>>>>>>>> ___
>>>>>>>

[ovirt-devel] Re: vdsm_standard-check-patch is stuck on Archiving artifacts

2019-08-19 Thread Amit Bawer
A quick/temp solution is also welcome at this point, as I need to verify
patches. Thanks

On Mon, Aug 19, 2019 at 8:56 AM Amit Bawer  wrote:

> Seems to happen again:
> https://jenkins.ovirt.org/job/standard-manual-runner/646/console
>
> On Sun, Aug 18, 2019 at 11:31 AM Amit Bawer  wrote:
>
>> Great, thanks for the update.
>>
>> On Sun, Aug 18, 2019 at 10:47 AM Ehud Yonasi  wrote:
>>
>>> Hey,
>>> The problem was with mock cache filling up the filesystem and it is now
>>> cleaned.  The s390x slave is back online.
>>>
>>> On Sun, Aug 18, 2019 at 10:05 AM Daniel Belenky 
>>> wrote:
>>>
>>>> Hi Nir,
>>>>
>>>> It seems that the reason behind this issue is that the s390x node is
>>>> offline.
>>>> I'm checking it right now and will update asap.
>>>>
>>>> On Sat, Aug 17, 2019 at 3:18 AM Nir Soffer  wrote:
>>>>
>>>>> On Sat, Aug 17, 2019 at 3:06 AM Nir Soffer  wrote:
>>>>>
>>>>>> This looks like a bug, so adding infra-support - this will open a
>>>>>> ticket and someone will look into it.
>>>>>>
>>>>>> On Sat, Aug 17, 2019 at 1:12 AM Amit Bawer  wrote:
>>>>>>
>>>>>>> Hi
>>>>>>> Unable to run CI builds and OSTs due to indefinite processing over
>>>>>>> fc29 Archiving artifacts phase
>>>>>>> Example run:
>>>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/console
>>>>>>>
>>>>>>
>>>>> We have 2 stuck builds:
>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10454/
>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/
>>>>>
>>>>> Both started 4 hours ago.
>>>>>
>>>>> It is possible to abort jobs from the jenkins UI, but usually it
>>>>> causes more trouble because of partial cleanup, so let's let the
>>>>> infra team handle this properly.
>>>>>
>>>>>
>>>>>> ___
>>>>>>> Devel mailing list -- devel@ovirt.org
>>>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct:
>>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> List Archives:
>>>>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JU3IZE2IQLVTVLRYCOOQZSQUC3QNLUD4/
>>>>>>>
>>>>>> ___
>>>>> Devel mailing list -- devel@ovirt.org
>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3LAOTG5RQDYQTYT7GBPLPSXT4WFHXETL/
>>>>>
>>>>
>>>>
>>>> --
>>>>
>>>> Daniel Belenky
>>>>
>>>> Red Hat <https://www.redhat.com/>
>>>> <https://red.ht/sig>
>>>>
>>>> ___
>>>> Devel mailing list -- devel@ovirt.org
>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5A34H2SKDT2HB6VNQ22YFULHAFQQFZ4E/
>>>>
>>>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/HMLTPDVJKASFHO2Q4MDHNBG5QCOYFRBW/


[ovirt-devel] Re: vdsm_standard-check-patch is stuck on Archiving artifacts

2019-08-18 Thread Amit Bawer
Seems to happen again:
https://jenkins.ovirt.org/job/standard-manual-runner/646/console

On Sun, Aug 18, 2019 at 11:31 AM Amit Bawer  wrote:

> Great, thanks for the update.
>
> On Sun, Aug 18, 2019 at 10:47 AM Ehud Yonasi  wrote:
>
>> Hey,
>> The problem was with mock cache filling up the filesystem and it is now
>> cleaned.  The s390x slave is back online.
>>
>> On Sun, Aug 18, 2019 at 10:05 AM Daniel Belenky 
>> wrote:
>>
>>> Hi Nir,
>>>
>>> It seems that the reason behind this issue is that the s390x node is
>>> offline.
>>> I'm checking it right now and will update asap.
>>>
>>> On Sat, Aug 17, 2019 at 3:18 AM Nir Soffer  wrote:
>>>
>>>> On Sat, Aug 17, 2019 at 3:06 AM Nir Soffer  wrote:
>>>>
>>>>> This looks like a bug, so adding infra-support - this will open a
>>>>> ticket and someone will look into it.
>>>>>
>>>>> On Sat, Aug 17, 2019 at 1:12 AM Amit Bawer  wrote:
>>>>>
>>>>>> Hi
>>>>>> Unable to run CI builds and OSTs due to indefinite processing over
>>>>>> fc29 Archiving artifacts phase
>>>>>> Example run:
>>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/console
>>>>>>
>>>>>
>>>> We have 2 stuck builds:
>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10454/
>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/
>>>>
>>>> Both started 4 hours ago.
>>>>
>>>> It is possible to abort jobs from the jenkins UI, but usually it
>>>> causes more trouble because of partial cleanup, so let's let the
>>>> infra team handle this properly.
>>>>
>>>>
>>>>> ___
>>>>>> Devel mailing list -- devel@ovirt.org
>>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct:
>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives:
>>>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JU3IZE2IQLVTVLRYCOOQZSQUC3QNLUD4/
>>>>>>
>>>>> ___
>>>> Devel mailing list -- devel@ovirt.org
>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3LAOTG5RQDYQTYT7GBPLPSXT4WFHXETL/
>>>>
>>>
>>>
>>> --
>>>
>>> Daniel Belenky
>>>
>>> Red Hat <https://www.redhat.com/>
>>> <https://red.ht/sig>
>>>
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5A34H2SKDT2HB6VNQ22YFULHAFQQFZ4E/
>>>
>>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/NFXUYAVU733EV7W2QKXOANDRNSRCULV4/


[ovirt-devel] Re: vdsm_standard-check-patch is stuck on Archiving artifacts

2019-08-18 Thread Amit Bawer
Great, thanks for the update.

On Sun, Aug 18, 2019 at 10:47 AM Ehud Yonasi  wrote:

> Hey,
> The problem was with mock cache filling up the filesystem and it is now
> cleaned.  The s390x slave is back online.
>
> On Sun, Aug 18, 2019 at 10:05 AM Daniel Belenky 
> wrote:
>
>> Hi Nir,
>>
>> It seems that the reason behind this issue is that the s390x node is
>> offline.
>> I'm checking it right now and will update asap.
>>
>> On Sat, Aug 17, 2019 at 3:18 AM Nir Soffer  wrote:
>>
>>> On Sat, Aug 17, 2019 at 3:06 AM Nir Soffer  wrote:
>>>
>>>> This looks like a bug, so adding infra-support - this will open a
>>>> ticket and someone will look into it.
>>>>
>>>> On Sat, Aug 17, 2019 at 1:12 AM Amit Bawer  wrote:
>>>>
>>>>> Hi
>>>>> Unable to run CI builds and OSTs due to indefinite processing over
>>>>> fc29 Archiving artifacts phase
>>>>> Example run:
>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/console
>>>>>
>>>>
>>> We have 2 stuck builds:
>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10454/
>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/
>>>
>>> Both started 4 hours ago.
>>>
>>> It is possible to abort jobs from the jenkins UI, but usually it
>>> causes more trouble because of partial cleanup, so let's let the
>>> infra team handle this properly.
>>>
>>>
>>>> ___
>>>>> Devel mailing list -- devel@ovirt.org
>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JU3IZE2IQLVTVLRYCOOQZSQUC3QNLUD4/
>>>>>
>>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3LAOTG5RQDYQTYT7GBPLPSXT4WFHXETL/
>>>
>>
>>
>> --
>>
>> Daniel Belenky
>>
>> Red Hat <https://www.redhat.com/>
>> <https://red.ht/sig>
>>
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/5A34H2SKDT2HB6VNQ22YFULHAFQQFZ4E/
>>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/ATLCRQY6EAQ2LZ6VJWWPFIMMLDYDMFOM/


[ovirt-devel] vdsm_standard-check-patch is stuck on Archiving artifacts

2019-08-16 Thread Amit Bawer
Hi
Unable to run CI builds and OSTs due to indefinite processing over the
fc29 Archiving artifacts phase.
Example run:
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/10453/console
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JU3IZE2IQLVTVLRYCOOQZSQUC3QNLUD4/


[ovirt-devel] Re: [OST Failure Report] [oVirt master&4.3] [09-08-2019] [verify_glance_import]

2019-08-12 Thread Amit Bawer
On Mon, Aug 12, 2019 at 2:35 PM Eyal Edri  wrote:

>
>
> On Mon, Aug 12, 2019 at 11:57 AM Amit Bawer  wrote:
>
>> I'm experiencing the same issue when running OST on master [1].
>> Is there any possible resolution/workaround?
>>
>
> This should be solved now, we reverted to use the old Glance instance.
>

Indeed, Thanks!


>
>
>>
>> [1]
>> https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5330/testReport/(root)/004_basic_sanity/verify_glance_import/
>>
>>
>> On Sat, Aug 10, 2019 at 10:55 PM Dusan Fodor  wrote:
>>
>>> Hi,
>>> we have failures in test verify glance import appearing in both 4.3 and
>>> master.
>>> Api call to get the disk status times out.
>>>
>>> Example of failed job can be seen here:
>>>
>>> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/1808/
>>>
>>> Can you please take a look?
>>> ___
>>> Devel mailing list -- devel@ovirt.org
>>> To unsubscribe send an email to devel-le...@ovirt.org
>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>> oVirt Code of Conduct:
>>> https://www.ovirt.org/community/about/community-guidelines/
>>> List Archives:
>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3HCLG3J7CFET5B5NEPDFMLH4PDQBP6G4/
>>>
>> ___
>> Infra mailing list -- in...@ovirt.org
>> To unsubscribe send an email to infra-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/in...@ovirt.org/message/3QL6MJDW4MJCRVWF4NCGLQJ3DBT5NMSK/
>>
>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat <https://www.redhat.com/>
> <https://red.ht/sig>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/VW4B7TFBHHDXCSQ6LOJESBPRNG2PGO5H/


[ovirt-devel] Re: [OST Failure Report] [oVirt master&4.3] [09-08-2019] [verify_glance_import]

2019-08-12 Thread Amit Bawer
I'm experiencing the same issue when running OST on master [1].
Is there any possible resolution/workaround?

[1]
https://jenkins.ovirt.org/job/ovirt-system-tests_manual/5330/testReport/(root)/004_basic_sanity/verify_glance_import/


On Sat, Aug 10, 2019 at 10:55 PM Dusan Fodor  wrote:

> Hi,
> we have failures in test verify glance import appearing in both 4.3 and
> master.
> Api call to get the disk status times out.
>
> Example of failed job can be seen here:
>
> https://jenkins.ovirt.org/view/Change%20queue%20jobs/job/ovirt-4.3_change-queue-tester/1808/
>
> Can you please take a look?
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3HCLG3J7CFET5B5NEPDFMLH4PDQBP6G4/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3QL6MJDW4MJCRVWF4NCGLQJ3DBT5NMSK/


[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-09 Thread Amit Bawer
On Fri, Aug 9, 2019 at 12:44 PM Vojtech Juranek  wrote:

> On čtvrtek 8. srpna 2019 14:57:12 CEST Amit Bawer wrote:
> > It's not always applicable. For example, in a PoC we need to get the
> > same branch working in different envs without dealing with lots of
> > cherry-picks from different branches.
>
> As a workaround you can run tests in Travis, which runs tests only for
> the latest commit.


Does this mean forking GitHub's vdsm?


> The flow can be (and I use it) - submit smaller batch of
> patches which are ready


Seems that I would still have to break a side branch into smaller branches
for the sake of gerrit topics.
And if there are review fixes, I'll have to make sure that both copies are
in sync.


> into gerrit and poke people to review them and merge,
>

In practice I have a single approver, so patches could be waiting for a
while.


> in meantime work on your current branch and push changes for testing into
> Travis.
>
> > On Thu, Aug 8, 2019 at 3:16 PM Milan Zamazal 
> wrote:
> > > Amit Bawer  writes:
> > > > On Thu, Aug 8, 2019 at 2:50 PM Marcin Sobczyk 
> > >
> > > wrote:
> > > >> On 8/8/19 1:44 PM, Amit Bawer wrote:
> > > >>
> > > >>
> > > >>
> > > >> On Thu, Aug 8, 2019 at 12:48 PM Milan Zamazal 
> > >
> > > wrote:
> > > >>> Amit Bawer  writes:
> > > >>> > On Wed, Aug 7, 2019 at 3:14 PM Nir Soffer 
> > >
> > > wrote:
> > > >>> >> On Wed, Aug 7, 2019 at 3:06 PM Amit Bawer 
> > >
> > > wrote:
> > > >>> >>> On Wed, Aug 7, 2019 at 2:53 PM Nir Soffer 
> > >
> > > wrote:
> > > >>> >>>> On Wed, Aug 7, 2019 at 1:23 PM Amit Bawer 
> > >
> > > wrote:
> > > >>> >>>>> On Wed, Aug 7, 2019 at 11:19 AM Amit Bawer <
> aba...@redhat.com>
> > > >>>
> > > >>> wrote:
> > > >>> >>>>>> On Tue, Aug 6, 2019 at 5:07 PM Nir Soffer <
> nsof...@redhat.com>
> > > >>>
> > > >>> wrote:
> > > >>> >>>>>>> On Tue, Aug 6, 2019 at 5:01 PM Amit Bawer <
> aba...@redhat.com>
> > > >>>
> > > >>> wrote:
> > > >>> >>>>>>>> On Tue, Aug 6, 2019 at 4:58 PM Nir Soffer <
> nsof...@redhat.com
> > > >>> >>>>>>>>
> > > >>> >>>>>>>> wrote:
> > > >>> >>>>>>>>> On Tue, Aug 6, 2019 at 11:27 AM Amit Bawer <
> > >
> > > aba...@redhat.com>
> > >
> > > >>> >>>>>>>>> wrote:
> > > >>> >>>>>>>>>> I have seen some improvement: when I re-trigger the CI
> per
> > > >>>
> > > >>> patch I
> > > >>>
> > > >>> >>>>>>>>>> am able to pass or get the actual test errors if any (if
> > > >>> >>>>>>>>>> not
> > > >>>
> > > >>> on first try,
> > > >>>
> > > >>> >>>>>>>>>> then on second one).
> > > >>> >>>>>>>>>> Probably not a very useful information, but I have
> noticed
> > >
> > > that
> > >
> > > >>> >>>>>>>>>> when I push 30+ patches at the same
> > > >>> >>>>>>>>>
> > > >>> >>>>>>>>> Do not do that, jenkins cannot handle 30 concurrent
> builds,
> > >
> > > and
> > >
> > > >>> is
> > > >>>
> > > >>> >>>>>>>>> it also bad for reviewers,
> > > >>> >>>>>>>>> getting several mails about every patch in your chain,
> for
> > >
> > > every
> > >
> > > >>> >>>>>>>>> rebase.
> > > >>> >>>>>>>>
> > > >>> >>>>>>>> Is there is a way to prevent CI from running per gerrit
> push
> > > >>> >>>>>>>> (without working on 30 different branches) ?
> > > >>> >>>>>>>
> > > >>> >&

[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-08 Thread Amit Bawer
It's not always applicable. For example, in a PoC we need to get the same
branch working in different envs without dealing with lots of
cherry-picks from different branches.

On Thu, Aug 8, 2019 at 3:16 PM Milan Zamazal  wrote:

> Amit Bawer  writes:
>
> > On Thu, Aug 8, 2019 at 2:50 PM Marcin Sobczyk 
> wrote:
> >
> >>
> >> On 8/8/19 1:44 PM, Amit Bawer wrote:
> >>
> >>
> >>
> >> On Thu, Aug 8, 2019 at 12:48 PM Milan Zamazal 
> wrote:
> >>
> >>> Amit Bawer  writes:
> >>>
> >>> > On Wed, Aug 7, 2019 at 3:14 PM Nir Soffer 
> wrote:
> >>> >
> >>> >> On Wed, Aug 7, 2019 at 3:06 PM Amit Bawer 
> wrote:
> >>> >>
> >>> >>>
> >>> >>>
> >>> >>> On Wed, Aug 7, 2019 at 2:53 PM Nir Soffer 
> wrote:
> >>> >>>
> >>> >>>> On Wed, Aug 7, 2019 at 1:23 PM Amit Bawer 
> wrote:
> >>> >>>>
> >>> >>>>>
> >>> >>>>>
> >>> >>>>> On Wed, Aug 7, 2019 at 11:19 AM Amit Bawer 
> >>> wrote:
> >>> >>>>>
> >>> >>>>>>
> >>> >>>>>>
> >>> >>>>>> On Tue, Aug 6, 2019 at 5:07 PM Nir Soffer 
> >>> wrote:
> >>> >>>>>>
> >>> >>>>>>> On Tue, Aug 6, 2019 at 5:01 PM Amit Bawer 
> >>> wrote:
> >>> >>>>>>>
> >>> >>>>>>>>
> >>> >>>>>>>>
> >>> >>>>>>>> On Tue, Aug 6, 2019 at 4:58 PM Nir Soffer  >
> >>> >>>>>>>> wrote:
> >>> >>>>>>>>
> >>> >>>>>>>>> On Tue, Aug 6, 2019 at 11:27 AM Amit Bawer <
> aba...@redhat.com>
> >>> >>>>>>>>> wrote:
> >>> >>>>>>>>>
> >>> >>>>>>>>>> I have seen some improvement: when I re-trigger the CI per
> >>> patch I
> >>> >>>>>>>>>> am able to pass or get the actual test errors if any (if not
> >>> on first try,
> >>> >>>>>>>>>> then on second one).
> >>> >>>>>>>>>> Probably not a very useful information, but I have noticed
> that
> >>> >>>>>>>>>> when I push 30+ patches at the same
> >>> >>>>>>>>>>
> >>> >>>>>>>>>
> >>> >>>>>>>>> Do not do that, jenkins cannot handle 30 concurrent builds,
> and
> >>> is
> >>> >>>>>>>>> it also bad for reviewers,
> >>> >>>>>>>>> getting several mails about every patch in your chain, for
> every
> >>> >>>>>>>>> rebase.
> >>> >>>>>>>>>
> >>> >>>>>>>>
> >>> >>>>>>>> Is there is a way to prevent CI from running per gerrit push
> >>> >>>>>>>> (without working on 30 different branches) ?
> >>> >>>>>>>>
> >>> >>>>>>>
> >>> >>>>>>> I don't know about such way.
> >>> >>>>>>>
> >>> >>>>>>
> >>> >>>>>> A legit option could be adding the Skip CI plugin to jenkins
> >>> plugins
> >>> >>>>>> [1]; with that devs can add "[skip ci]" to their commit messages
> >>> to prevent
> >>> >>>>>> jenkins from automatically launching CI upon push.
> >>> >>>>>>
> >>> >>>>>
> >>> >>>> Do you want to modify the commit message for every patch to decide
> >>> if ci
> >>> >>>> should run or not?
> >>> >>>>
> >>> >>>
> >>> >>> I think that having the option to knowingly disable automated CI in
> >>> some
> >>> >>> cases is useful. We could always re-trigger when time is right [3].
> >>> >>> [3]
> https://jenkins.ovirt.org/log

[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-08 Thread Amit Bawer
On Thu, Aug 8, 2019 at 2:50 PM Marcin Sobczyk  wrote:

>
> On 8/8/19 1:44 PM, Amit Bawer wrote:
>
>
>
> On Thu, Aug 8, 2019 at 12:48 PM Milan Zamazal  wrote:
>
>> Amit Bawer  writes:
>>
>> > On Wed, Aug 7, 2019 at 3:14 PM Nir Soffer  wrote:
>> >
>> >> On Wed, Aug 7, 2019 at 3:06 PM Amit Bawer  wrote:
>> >>
>> >>>
>> >>>
>> >>> On Wed, Aug 7, 2019 at 2:53 PM Nir Soffer  wrote:
>> >>>
>> >>>> On Wed, Aug 7, 2019 at 1:23 PM Amit Bawer  wrote:
>> >>>>
>> >>>>>
>> >>>>>
>> >>>>> On Wed, Aug 7, 2019 at 11:19 AM Amit Bawer 
>> wrote:
>> >>>>>
>> >>>>>>
>> >>>>>>
>> >>>>>> On Tue, Aug 6, 2019 at 5:07 PM Nir Soffer 
>> wrote:
>> >>>>>>
>> >>>>>>> On Tue, Aug 6, 2019 at 5:01 PM Amit Bawer 
>> wrote:
>> >>>>>>>
>> >>>>>>>>
>> >>>>>>>>
>> >>>>>>>> On Tue, Aug 6, 2019 at 4:58 PM Nir Soffer 
>> >>>>>>>> wrote:
>> >>>>>>>>
>> >>>>>>>>> On Tue, Aug 6, 2019 at 11:27 AM Amit Bawer 
>> >>>>>>>>> wrote:
>> >>>>>>>>>
>> >>>>>>>>>> I have seen some improvement: when I re-trigger the CI per
>> patch I
>> >>>>>>>>>> am able to pass or get the actual test errors if any (if not
>> on first try,
>> >>>>>>>>>> then on second one).
>> >>>>>>>>>> Probably not a very useful information, but I have noticed that
>> >>>>>>>>>> when I push 30+ patches at the same
>> >>>>>>>>>>
>> >>>>>>>>>
>> >>>>>>>>> Do not do that, jenkins cannot handle 30 concurrent builds, and
>> is
>> >>>>>>>>> it also bad for reviewers,
>> >>>>>>>>> getting several mails about every patch in your chain, for every
>> >>>>>>>>> rebase.
>> >>>>>>>>>
>> >>>>>>>>
>> >>>>>>>> Is there is a way to prevent CI from running per gerrit push
>> >>>>>>>> (without working on 30 different branches) ?
>> >>>>>>>>
>> >>>>>>>
>> >>>>>>> I don't know about such way.
>> >>>>>>>
>> >>>>>>
>> >>>>>> A legit option could be adding the Skip CI plugin to jenkins
>> plugins
>> >>>>>> [1]; with that devs can add "[skip ci]" to their commit messages
>> to prevent
>> >>>>>> jenkins from automatically launching CI upon push.
>> >>>>>>
>> >>>>>
>> >>>> Do you want to modify the commit message for every patch to decide
>> if ci
>> >>>> should run or not?
>> >>>>
>> >>>
>> >>> I think that having the option to knowingly disable automated CI in
>> some
>> >>> cases is useful. We could always re-trigger when time is right [3].
>> >>> [3] https://jenkins.ovirt.org/login?from=%2Fgerrit_manual_trigger%2F
>> >>>
>> >>
>> >> This is too much work, but I think today we can add a comment to gerrit
>> >> like
>> >>
>> >> ci please test
>> >>
>> >> That will trigger a build of this patch.
>> >>
>> >
>> > Indeed, but it leaves the "Continuous-Integration" mark untouched in
>> > gerrit, giving the wrong impression this patch is still CI failed.
>>
>> No, it updates CI score.  I use it routinely with falsely failed tests.
>>
>> In my experience, CI score may not get updated if there are concurrent
>> builds, such as when you upload a new version of a patch while CI is
>> still running on the previous version.
>>
>
I may have missed something, but I tried the "ci build" gerrit comment on one
of the CI-failed patches (https://gerrit.ovirt.org/#/c/101357/). The CI build
passed, but the CI indicator is still -1. AFAICT I had no other CI jobs
running at the time.

[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-08 Thread Amit Bawer
On Thu, Aug 8, 2019 at 12:48 PM Milan Zamazal  wrote:

> Amit Bawer  writes:
>
> > On Wed, Aug 7, 2019 at 3:14 PM Nir Soffer  wrote:
> >
> >> On Wed, Aug 7, 2019 at 3:06 PM Amit Bawer  wrote:
> >>
> >>>
> >>>
> >>> On Wed, Aug 7, 2019 at 2:53 PM Nir Soffer  wrote:
> >>>
> >>>> On Wed, Aug 7, 2019 at 1:23 PM Amit Bawer  wrote:
> >>>>
> >>>>>
> >>>>>
> >>>>> On Wed, Aug 7, 2019 at 11:19 AM Amit Bawer 
> wrote:
> >>>>>
> >>>>>>
> >>>>>>
> >>>>>> On Tue, Aug 6, 2019 at 5:07 PM Nir Soffer 
> wrote:
> >>>>>>
> >>>>>>> On Tue, Aug 6, 2019 at 5:01 PM Amit Bawer 
> wrote:
> >>>>>>>
> >>>>>>>>
> >>>>>>>>
> >>>>>>>> On Tue, Aug 6, 2019 at 4:58 PM Nir Soffer 
> >>>>>>>> wrote:
> >>>>>>>>
> >>>>>>>>> On Tue, Aug 6, 2019 at 11:27 AM Amit Bawer 
> >>>>>>>>> wrote:
> >>>>>>>>>
> >>>>>>>>>> I have seen some improvement: when I re-trigger the CI per
> patch I
> >>>>>>>>>> am able to pass or get the actual test errors if any (if not on
> first try,
> >>>>>>>>>> then on second one).
> >>>>>>>>>> Probably not a very useful information, but I have noticed that
> >>>>>>>>>> when I push 30+ patches at the same
> >>>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> Do not do that, jenkins cannot handle 30 concurrent builds, and
> is
> >>>>>>>>> it also bad for reviewers,
> >>>>>>>>> getting several mails about every patch in your chain, for every
> >>>>>>>>> rebase.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> Is there is a way to prevent CI from running per gerrit push
> >>>>>>>> (without working on 30 different branches) ?
> >>>>>>>>
> >>>>>>>
> >>>>>>> I don't know about such way.
> >>>>>>>
> >>>>>>
> >>>>>> A legit option could be adding the Skip CI plugin to jenkins plugins
> >>>>>> [1]; with that devs can add "[skip ci]" to their commit messages to
> prevent
> >>>>>> jenkins from automatically launching CI upon push.
> >>>>>>
> >>>>>
> >>>> Do you want to modify the commit message for every patch to decide if
> ci
> >>>> should run or not?
> >>>>
> >>>
> >>> I think that having the option to knowingly disable automated CI in
> some
> >>> cases is useful. We could always re-trigger when time is right [3].
> >>> [3] https://jenkins.ovirt.org/login?from=%2Fgerrit_manual_trigger%2F
> >>>
> >>
> >> This is too much work, but I think today we can add a comment to gerrit
> >> like
> >>
> >> ci please test
> >>
> >> That will trigger a build of this patch.
> >>
> >
> > Indeed, but it leaves the "Continuous-Integration" mark untouched in
> > gerrit, giving the wrong impression this patch is still CI failed.
>
> No, it updates CI score.  I use it routinely with falsely failed tests.
>
> In my experience, CI score may not get updated if there are concurrent
> builds, such as when you upload a new version of a patch while CI is
> still running on the previous version.
>

I may have missed something, but I tried the "ci build" gerrit comment on one
of the CI-failed patches (https://gerrit.ovirt.org/#/c/101357/). The CI build
passed, but the CI indicator is still -1. AFAICT I had no other CI jobs
running at the time.


>
> > The re-trigger UI takes care for that as well.
> >
> >
> >>
> >>
> >>>
> >>>
> >>>>
> >>>>> Another option is to emulate the behaviour in the existing gerrit
> >>>>>> plugin (guess there is already such one in ovirt's jenkins), for
> example
> >>>>>> skipping by a topic regex [2].
> >>>>>>
> >>>>>

[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-07 Thread Amit Bawer
On Wed, Aug 7, 2019 at 3:14 PM Nir Soffer  wrote:

> On Wed, Aug 7, 2019 at 3:06 PM Amit Bawer  wrote:
>
>>
>>
>> On Wed, Aug 7, 2019 at 2:53 PM Nir Soffer  wrote:
>>
>>> On Wed, Aug 7, 2019 at 1:23 PM Amit Bawer  wrote:
>>>
>>>>
>>>>
>>>> On Wed, Aug 7, 2019 at 11:19 AM Amit Bawer  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Aug 6, 2019 at 5:07 PM Nir Soffer  wrote:
>>>>>
>>>>>> On Tue, Aug 6, 2019 at 5:01 PM Amit Bawer  wrote:
>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On Tue, Aug 6, 2019 at 4:58 PM Nir Soffer 
>>>>>>> wrote:
>>>>>>>
>>>>>>>> On Tue, Aug 6, 2019 at 11:27 AM Amit Bawer 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> I have seen some improvement: when I re-trigger the CI per patch I
>>>>>>>>> am able to pass or get the actual test errors if any (if not on first 
>>>>>>>>> try,
>>>>>>>>> then on second one).
>>>>>>>>> Probably not a very useful information, but I have noticed that
>>>>>>>>> when I push 30+ patches at the same
>>>>>>>>>
>>>>>>>>
>>>>>>>> Do not do that, jenkins cannot handle 30 concurrent builds, and is
>>>>>>>> it also bad for reviewers,
>>>>>>>> getting several mails about every patch in your chain, for every
>>>>>>>> rebase.
>>>>>>>>
>>>>>>>
>>>>>>> Is there is a way to prevent CI from running per gerrit push
>>>>>>> (without working on 30 different branches) ?
>>>>>>>
>>>>>>
>>>>>> I don't know about such way.
>>>>>>
>>>>>
>>>>> A legit option could be adding the Skip CI plugin to jenkins plugins
>>>>> [1]; with that devs can add "[skip ci]" to their commit messages to 
>>>>> prevent
>>>>> jenkins from automatically launching CI upon push.
>>>>>
>>>>
>>> Do you want to modify the commit message for every patch to decide if ci
>>> should run or not?
>>>
>>
>> I think that having the option to knowingly disable automated CI in some
>> cases is useful. We could always re-trigger when time is right [3].
>> [3] https://jenkins.ovirt.org/login?from=%2Fgerrit_manual_trigger%2F
>>
>
> This is too much work, but I think today we can add a comment to gerrit
> like
>
> ci please test
>
> That will trigger a build of this patch.
>

Indeed, but it leaves the "Continuous-Integration" mark untouched in
gerrit, giving the wrong impression that the patch still fails CI. The
re-trigger UI takes care of that as well.


>
>
>>
>>
>>>
>>>> Another option is to emulate the behaviour in the existing gerrit
>>>>> plugin (guess there is already such one in ovirt's jenkins), for example
>>>>> skipping by a topic regex [2].
>>>>>
>>>>
>>> Not clear how this will help.
>>>
>>
>> If I make a gerrit topic with some name like "my_feature_skip_ci" I can
>> control whether I want to have automated CI for its patches.
>> When I want to go back to normal I can rename it to "my_feature" and have
>> CI per push as usual.
>>
>>
>>> I think a possible solution can be running only the top patch in a
>>> changeset, same way we have in travis,
>>> and the same way systems that grab patches from mailing lists work.
>>> Every post to gerrit will trigger one
>>> build, instead of one build per patch in the chain.
>>>
>>
>> That could do as well.
>>
>>
>>> Of course this will allow merging broken patches that are fixed by a
>>> later patch in the chain, which
>>> is also not ideal, but it is better given our restricted resources.
>>>
>>
>> We can re-trigger CI manually in this case as part of the verification
>> process.
>>
>
>>
>>> +Anton Marchukov   I have been told you might be
>>>> familiar with a similar solution.
>>>>
>>>>>
>>>>> [1] https://plugins.jenkins.io/ci-skip
>>>>> [2]
>>>>> https

[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-07 Thread Amit Bawer
On Wed, Aug 7, 2019 at 2:53 PM Nir Soffer  wrote:

> On Wed, Aug 7, 2019 at 1:23 PM Amit Bawer  wrote:
>
>>
>>
>> On Wed, Aug 7, 2019 at 11:19 AM Amit Bawer  wrote:
>>
>>>
>>>
>>> On Tue, Aug 6, 2019 at 5:07 PM Nir Soffer  wrote:
>>>
>>>> On Tue, Aug 6, 2019 at 5:01 PM Amit Bawer  wrote:
>>>>
>>>>>
>>>>>
>>>>> On Tue, Aug 6, 2019 at 4:58 PM Nir Soffer  wrote:
>>>>>
>>>>>> On Tue, Aug 6, 2019 at 11:27 AM Amit Bawer  wrote:
>>>>>>
>>>>>>> I have seen some improvement: when I re-trigger the CI per patch I
>>>>>>> am able to pass or get the actual test errors if any (if not on first 
>>>>>>> try,
>>>>>>> then on second one).
>>>>>>> Probably not a very useful information, but I have noticed that when
>>>>>>> I push 30+ patches at the same
>>>>>>>
>>>>>>
>>>>>> Do not do that, jenkins cannot handle 30 concurrent builds, and is it
>>>>>> also bad for reviewers,
>>>>>> getting several mails about every patch in your chain, for every
>>>>>> rebase.
>>>>>>
>>>>>
>>>>> Is there is a way to prevent CI from running per gerrit push (without
>>>>> working on 30 different branches) ?
>>>>>
>>>>
>>>> I don't know about such way.
>>>>
>>>
>>> A legit option could be adding the Skip CI plugin to jenkins plugins
>>> [1]; with that devs can add "[skip ci]" to their commit messages to prevent
>>> jenkins from automatically launching CI upon push.
>>>
>>
> Do you want to modify the commit message for every patch to decide if ci
> should run or not?
>

I think that having the option to knowingly disable automated CI in some
cases is useful. We could always re-trigger when the time is right [3].
[3] https://jenkins.ovirt.org/login?from=%2Fgerrit_manual_trigger%2F


>
>> Another option is to emulate the behaviour in the existing gerrit plugin
>>> (guess there is already such one in ovirt's jenkins), for example skipping
>>> by a topic regex [2].
>>>
>>
> Not clear how this will help.
>

If I make a gerrit topic with some name like "my_feature_skip_ci", I can
control whether I want automated CI for its patches.
When I want to go back to normal, I can rename it to "my_feature" and have
CI per push as usual.
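
A minimal sketch of the topic-name opt-out proposed above. The "_skip_ci"
suffix is only the convention suggested in this thread; in practice it would
have to match whatever regex the Jenkins gerrit trigger is configured with,
so treat the names here as illustrative assumptions.

```python
# Hypothetical trigger-side predicate for the topic-based CI opt-out:
# builds run only for topics that do not carry the skip suffix.
def topic_wants_ci(topic):
    """Return False when the gerrit topic name opts out of automated CI."""
    return not topic.endswith("_skip_ci")

print(topic_wants_ci("my_feature"))          # prints: True
print(topic_wants_ci("my_feature_skip_ci"))  # prints: False
```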


> I think a possible solution can be running only the top patch in a
> changeset, same way we have in travis,
> and the same way systems that grab patches from mailing lists work. Every
> post to gerrit will trigger one
> build, instead of one build per patch in the chain.
>

That could do as well.


> Of course this will allow merging broken patches that are fixed by a later
> patch in the chain, which
> is also not ideal, but it is better given our restricted resources.
>

We can re-trigger CI manually in this case as part of the verification
process.


> +Anton Marchukov   I have been told you might be
>> familiar with a similar solution.
>>
>>>
>>> [1] https://plugins.jenkins.io/ci-skip
>>> [2]
>>> https://stackoverflow.com/questions/37807941/how-can-i-get-jenkins-gerrit-trigger-to-ignore-my-ci-users-commits
>>>
>>>
>>>>
>>>> I'm using keeping several small active branches. While you wait for
>>>> reviews on one topic
>>>> you can work on the other branches.
>>>>
>>>>
>>>>
>>>>>
>>>>>>
>>>>>>> time the AWS connection issue arises constantly.
>>>>>>>
>>>>>>> On Sun, Aug 4, 2019 at 4:49 PM Eyal Edri  wrote:
>>>>>>>
>>>>>>>> This was reported already and AFAIK its a network issue between AWS
>>>>>>>> and PHX which is still being investigated.
>>>>>>>> Evgheni, any insights or update on the issue? should we involve
>>>>>>>> debugging from amazon side?
>>>>>>>>
>>>>>>>> On Sun, Aug 4, 2019 at 4:46 PM Amit Bawer 
>>>>>>>> wrote:
>>>>>>>>
>>>>>>>>> Hi,
>>>>>>>>> CI seems to fail constantly for unavailable remote gerrit
>>>>>>>>> repository.
>>>>>>>>> Example can be seen here:
>>>>>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/9415/console

[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-07 Thread Amit Bawer
On Wed, Aug 7, 2019 at 11:19 AM Amit Bawer  wrote:

>
>
> On Tue, Aug 6, 2019 at 5:07 PM Nir Soffer  wrote:
>
>> On Tue, Aug 6, 2019 at 5:01 PM Amit Bawer  wrote:
>>
>>>
>>>
>>> On Tue, Aug 6, 2019 at 4:58 PM Nir Soffer  wrote:
>>>
>>>> On Tue, Aug 6, 2019 at 11:27 AM Amit Bawer  wrote:
>>>>
>>>>> I have seen some improvement: when I re-trigger the CI per patch I am
>>>>> able to pass or get the actual test errors if any (if not on first try,
>>>>> then on second one).
>>>>> Probably not a very useful information, but I have noticed that when I
>>>>> push 30+ patches at the same
>>>>>
>>>>
>>>> Do not do that, jenkins cannot handle 30 concurrent builds, and is it
>>>> also bad for reviewers,
>>>> getting several mails about every patch in your chain, for every rebase.
>>>>
>>>
>>> Is there is a way to prevent CI from running per gerrit push (without
>>> working on 30 different branches) ?
>>>
>>
>> I don't know about such way.
>>
>
> A legit option could be adding the Skip CI plugin to jenkins plugins [1];
> with that devs can add "[skip ci]" to their commit messages to prevent
> jenkins from automatically launching CI upon push.
> Another option is to emulate the behaviour in the existing gerrit plugin
> (guess there is already such one in ovirt's jenkins), for example skipping
> by a topic regex [2].
>

+Anton Marchukov   I have been told you might be
familiar with a similar solution.

>
> [1] https://plugins.jenkins.io/ci-skip
> [2]
> https://stackoverflow.com/questions/37807941/how-can-i-get-jenkins-gerrit-trigger-to-ignore-my-ci-users-commits
>
>
>>
>> I'm using keeping several small active branches. While you wait for
>> reviews on one topic
>> you can work on the other branches.
>>
>>
>>
>>>
>>>>
>>>>> time the AWS connection issue arises constantly.
>>>>>
>>>>> On Sun, Aug 4, 2019 at 4:49 PM Eyal Edri  wrote:
>>>>>
>>>>>> This was reported already and AFAIK its a network issue between AWS
>>>>>> and PHX which is still being investigated.
>>>>>> Evgheni, any insights or update on the issue? should we involve
>>>>>> debugging from amazon side?
>>>>>>
>>>>>> On Sun, Aug 4, 2019 at 4:46 PM Amit Bawer  wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>> CI seems to fail constantly for unavailable remote gerrit repository.
>>>>>>> Example can be seen here:
>>>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/9415/console
>>>>>>> ___
>>>>>>> Devel mailing list -- devel@ovirt.org
>>>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>>> oVirt Code of Conduct:
>>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>>> List Archives:
>>>>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AHPHUZAABAQNWEMD2JQ6WARHJRDTYCPI/
>>>>>>>
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> Eyal edri
>>>>>>
>>>>>> He / Him / His
>>>>>>
>>>>>>
>>>>>> MANAGER
>>>>>>
>>>>>> CONTINUOUS PRODUCTIZATION
>>>>>>
>>>>>> SYSTEM ENGINEERING
>>>>>>
>>>>>> Red Hat <https://www.redhat.com/>
>>>>>> <https://red.ht/sig>
>>>>>> phone: +972-9-7692018
>>>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>>>>>>
>>>>> ___
>>>>> Devel mailing list -- devel@ovirt.org
>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>> oVirt Code of Conduct:
>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>> List Archives:
>>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/W6DUMIUSN5DPUVGUFUNHF2ZALB5I4JPZ/
>>>>>
>>>>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/O35NRDUFYTSLU3XLQLDZZ5SUM3XUJQ3N/


[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-07 Thread Amit Bawer
On Tue, Aug 6, 2019 at 5:07 PM Nir Soffer  wrote:

> On Tue, Aug 6, 2019 at 5:01 PM Amit Bawer  wrote:
>
>>
>>
>> On Tue, Aug 6, 2019 at 4:58 PM Nir Soffer  wrote:
>>
>>> On Tue, Aug 6, 2019 at 11:27 AM Amit Bawer  wrote:
>>>
>>>> I have seen some improvement: when I re-trigger the CI per patch I am
>>>> able to pass or get the actual test errors if any (if not on first try,
>>>> then on second one).
>>>> Probably not a very useful information, but I have noticed that when I
>>>> push 30+ patches at the same
>>>>
>>>
>>> Do not do that, jenkins cannot handle 30 concurrent builds, and is it
>>> also bad for reviewers,
>>> getting several mails about every patch in your chain, for every rebase.
>>>
>>
>> Is there is a way to prevent CI from running per gerrit push (without
>> working on 30 different branches) ?
>>
>
> I don't know about such way.
>

A legit option could be adding the Skip CI plugin [1] to the Jenkins plugins;
with that, devs can add "[skip ci]" to their commit messages to prevent
Jenkins from automatically launching CI upon push.
Another option is to emulate that behaviour in the existing gerrit trigger
plugin (I guess there is already such one in oVirt's Jenkins), for example by
skipping builds whose topic matches a regex [2].

[1] https://plugins.jenkins.io/ci-skip
[2]
https://stackoverflow.com/questions/37807941/how-can-i-get-jenkins-gerrit-trigger-to-ignore-my-ci-users-commits
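
To make the commit-message option concrete, here is a minimal, hypothetical
sketch of the check a plugin like ci-skip [1] performs before triggering a
build. The exact marker spellings (and treating "[ci skip]" as an alias) are
assumptions for illustration, not oVirt policy or the plugin's verbatim code.

```python
import re

# Conventional opt-out markers; case-insensitive so "[CI SKIP]" also works.
_SKIP_CI = re.compile(r"\[(?:skip ci|ci skip)\]", re.IGNORECASE)

def should_run_ci(commit_message):
    """Return False when the commit message opts out of automated CI."""
    return _SKIP_CI.search(commit_message) is None

print(should_run_ci("core: fix lock ordering"))     # prints: True
print(should_run_ci("wip: poc sync\n\n[skip ci]"))  # prints: False
```

With such a gate in place, pushing a 30-patch chain tagged "[skip ci]" would
queue no builds, and CI could still be launched later via a manual re-trigger.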


>
> I'm using keeping several small active branches. While you wait for
> reviews on one topic
> you can work on the other branches.
>
>
>
>>
>>>
>>>> time the AWS connection issue arises constantly.
>>>>
>>>> On Sun, Aug 4, 2019 at 4:49 PM Eyal Edri  wrote:
>>>>
>>>>> This was reported already and AFAIK its a network issue between AWS
>>>>> and PHX which is still being investigated.
>>>>> Evgheni, any insights or update on the issue? should we involve
>>>>> debugging from amazon side?
>>>>>
>>>>> On Sun, Aug 4, 2019 at 4:46 PM Amit Bawer  wrote:
>>>>>
>>>>>> Hi,
>>>>>> CI seems to fail constantly for unavailable remote gerrit repository.
>>>>>> Example can be seen here:
>>>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/9415/console
>>>>>> ___
>>>>>> Devel mailing list -- devel@ovirt.org
>>>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>>>> oVirt Code of Conduct:
>>>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>>>> List Archives:
>>>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AHPHUZAABAQNWEMD2JQ6WARHJRDTYCPI/
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>>
>>>>> Eyal edri
>>>>>
>>>>> He / Him / His
>>>>>
>>>>>
>>>>> MANAGER
>>>>>
>>>>> CONTINUOUS PRODUCTIZATION
>>>>>
>>>>> SYSTEM ENGINEERING
>>>>>
>>>>> Red Hat <https://www.redhat.com/>
>>>>> <https://red.ht/sig>
>>>>> phone: +972-9-7692018
>>>>> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>>>>>
>>>> ___
>>>> Devel mailing list -- devel@ovirt.org
>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/W6DUMIUSN5DPUVGUFUNHF2ZALB5I4JPZ/
>>>>
>>>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/FIQNCVGEKADDZ3XC65QYSI5TCCTPEIKS/


[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-06 Thread Amit Bawer
On Tue, Aug 6, 2019 at 4:58 PM Nir Soffer  wrote:

> On Tue, Aug 6, 2019 at 11:27 AM Amit Bawer  wrote:
>
>> I have seen some improvement: when I re-trigger the CI per patch I am
>> able to pass or get the actual test errors if any (if not on first try,
>> then on second one).
>> Probably not a very useful information, but I have noticed that when I
>> push 30+ patches at the same
>>
>
> Do not do that, jenkins cannot handle 30 concurrent builds, and is it also
> bad for reviewers,
> getting several mails about every patch in your chain, for every rebase.
>

Is there a way to prevent CI from running per gerrit push (without
working on 30 different branches)?

>
>
>> time the AWS connection issue arises constantly.
>>
>> On Sun, Aug 4, 2019 at 4:49 PM Eyal Edri  wrote:
>>
>>> This was reported already and AFAIK its a network issue between AWS and
>>> PHX which is still being investigated.
>>> Evgheni, any insights or update on the issue? should we involve
>>> debugging from amazon side?
>>>
>>> On Sun, Aug 4, 2019 at 4:46 PM Amit Bawer  wrote:
>>>
>>>> Hi,
>>>> CI seems to fail constantly for unavailable remote gerrit repository.
>>>> Example can be seen here:
>>>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/9415/console
>>>> ___
>>>> Devel mailing list -- devel@ovirt.org
>>>> To unsubscribe send an email to devel-le...@ovirt.org
>>>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>>>> oVirt Code of Conduct:
>>>> https://www.ovirt.org/community/about/community-guidelines/
>>>> List Archives:
>>>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AHPHUZAABAQNWEMD2JQ6WARHJRDTYCPI/
>>>>
>>>
>>>
>>> --
>>>
>>> Eyal edri
>>>
>>> He / Him / His
>>>
>>>
>>> MANAGER
>>>
>>> CONTINUOUS PRODUCTIZATION
>>>
>>> SYSTEM ENGINEERING
>>>
>>> Red Hat <https://www.redhat.com/>
>>> <https://red.ht/sig>
>>> phone: +972-9-7692018
>>> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>>>
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/W6DUMIUSN5DPUVGUFUNHF2ZALB5I4JPZ/
>>
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/7GWYIBO65E6BHTVTKYVTLZKBPMTQ4LDC/


[ovirt-devel] Re: CI: vdsm-standard-check-patch fails

2019-08-06 Thread Amit Bawer
I have seen some improvement: when I re-trigger the CI per patch, I am able
to pass or get the actual test errors, if any (if not on the first try, then
on the second one).
Probably not very useful information, but I have noticed that when I push
30+ patches at the same time, the AWS connection issue arises constantly.

On Sun, Aug 4, 2019 at 4:49 PM Eyal Edri  wrote:

> This was reported already and AFAIK its a network issue between AWS and
> PHX which is still being investigated.
> Evgheni, any insights or update on the issue? should we involve debugging
> from amazon side?
>
> On Sun, Aug 4, 2019 at 4:46 PM Amit Bawer  wrote:
>
>> Hi,
>> CI seems to fail constantly for unavailable remote gerrit repository.
>> Example can be seen here:
>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/9415/console
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AHPHUZAABAQNWEMD2JQ6WARHJRDTYCPI/
>>
>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat <https://www.redhat.com/>
> <https://red.ht/sig>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/W6DUMIUSN5DPUVGUFUNHF2ZALB5I4JPZ/


[ovirt-devel] CI: vdsm-standard-check-patch fails

2019-08-04 Thread Amit Bawer
Hi,
CI seems to fail constantly for unavailable remote gerrit repository.
Example can be seen here:
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/9415/console
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/AHPHUZAABAQNWEMD2JQ6WARHJRDTYCPI/


[ovirt-devel] Re: Vdsm NFS on RHEL 8

2019-07-31 Thread Amit Bawer
Many thanks, Marcin. Attending to some patches in the meantime.

On Wed, Jul 31, 2019 at 3:12 PM Marcin Sobczyk  wrote:

>
> On 7/31/19 12:33 PM, Amit Bawer wrote:
>
>
>
> On Wed, Jul 31, 2019 at 1:11 PM Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>>
>> > On 30 Jul 2019, at 16:08, Milan Zamazal  wrote:
>> >
>> > Amit Bawer  writes:
>> >
>> >> Cherry-picked (locally)  'py3-hooks' pending gerrit patches on top of
>> >> 'py3_poc' branch.
>> >>
>> >> Able to start VM ,
>> >
>> > Cool!
>> >
>> >> but cannot connect graphics console - when trying it shows blank
>> >> screen with "connecting to graphics sever" and nothing happens.
>> >
>> > Did you try it with VNC console?  There is better chance with VNC than
>> > with SPICE.
>>
>> or headless. That worked 2 weeks ago already.
>>
>
> Thanks. Managed to get to VM console on VNC mode.
> Yet when trying to choose CD image there i am seeing the following py3
> error in vdsm.log:
>
> 2019-07-31 05:58:00,935-0400 INFO  (Thread-2) [vds.http.Server] Request
> handler for :::10.35.0.140:33459 started (http:306)
> 2019-07-31 05:58:00,936-0400 ERROR (Thread-2)
> [rpc.http.ImageRequestHandler] error during execution (http:253)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/rpc/http.py", line 154, in
> do_PUT
> httplib.LENGTH_REQUIRED)
>   File "/usr/lib/python3.6/site-packages/vdsm/rpc/http.py", line 216, in
> _getIntHeader
> value = self._getRequiredHeader(headerName, missingError)
>   File "/usr/lib/python3.6/site-packages/vdsm/rpc/http.py", line 221, in
> _getRequiredHeader
> value = self.headers.getheader(
> AttributeError: 'HTTPMessage' object has no attribute 'getheader'
>
> A quick look at py27 docs vs py36/py37 docs show that indeed the
> implementation of HTTPMessage is very different between those two.
> I will handle this and get back to you as soon as there's a fix available.
>
>
>
>
>> Once the current patches are merged it’s going to be far easier for
>> everyone to resolve the other remaining issues
>>
>
> I believe so as well, but its up to the gerrit committee  :)
>
>
>> >
>> >> No error message in vdsm.log.
>> >>
>> >> [image: image.png]
>> >>
>> >> On Mon, Jul 29, 2019 at 5:13 PM Amit Bawer  wrote:
>> >>
>> >>> I see. Since there are several patches on this topic. Please ping me
>> when
>> >>> its merged and I'll rebase the PoC branch.
>> >>> Thanks!
>> >>>
>> >>> On Mon, Jul 29, 2019 at 4:51 PM Marcin Sobczyk 
>> >>> wrote:
>> >>>
>> >>>>
>> >>>> On 7/29/19 3:40 PM, Amit Bawer wrote:
>> >>>>
>> >>>> Thanks Marcin.
>> >>>> I think we made a progress, former qemu spice TLS port error is now
>> gone
>> >>>> with this hook.
>> >>>>
>> >>>> Now its seems like py3 issue for hooks handling:
>> >>>>
>> >>>> Unfortunately it doesn't mean the hook actually worked - now you get
>> an
>> >>>> error probably a bit earlier, when trying to run the hook and never
>> get to
>> >>>> the previous place.
>> >>>> As I mentioned in the previous email you need my hook fixes for this
>> >>>> stuff to work.
>> >>>> You can do a quick and dirty fix by simply taking 'hooks.py' from
>> >>>> https://gerrit.ovirt.org/#/c/102049/ or rebase on top of the whole
>> >>>> 'py3-hooks' topic.
>> >>>>
>> >>>>
>> >>>>
>> >>>> 2019-07-29 09:29:54,981-0400 INFO  (vm/f62ae48a) [vds] prepared
>> volume
>> >>>> path: /rhev/data-center/mnt/10.35.0.
>> 136:_exports_data/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65
>> >>>> (clientIF:501)
>> >>>> 2019-07-29 09:29:54,982-0400 INFO  (vm/f62ae48a) [virt.vm]
>> >>>> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') Enabling drive
>> monitoring
>> >>>> (drivemonitor:56)
>> >>>> 2019-07-29 09:29:55,052-0400 WARN  (vm/f62ae48a) [root] Attempting
>> to a

[ovirt-devel] Re: Vdsm NFS on RHEL 8

2019-07-31 Thread Amit Bawer
On Wed, Jul 31, 2019 at 1:11 PM Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> > On 30 Jul 2019, at 16:08, Milan Zamazal  wrote:
> >
> > Amit Bawer  writes:
> >
> >> Cherry-picked (locally)  'py3-hooks' pending gerrit patches on top of
> >> 'py3_poc' branch.
> >>
> >> Able to start VM ,
> >
> > Cool!
> >
> >> but cannot connect graphics console - when trying it shows blank
> >> screen with "connecting to graphics sever" and nothing happens.
> >
> > Did you try it with VNC console?  There is better chance with VNC than
> > with SPICE.
>
> or headless. That worked 2 weeks ago already.
>

Thanks. Managed to get to the VM console in VNC mode.
Yet when trying to choose a CD image there, I am seeing the following py3
error in vdsm.log:

2019-07-31 05:58:00,935-0400 INFO  (Thread-2) [vds.http.Server] Request
handler for :::10.35.0.140:33459 started (http:306)
2019-07-31 05:58:00,936-0400 ERROR (Thread-2)
[rpc.http.ImageRequestHandler] error during execution (http:253)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/rpc/http.py", line 154, in
do_PUT
httplib.LENGTH_REQUIRED)
  File "/usr/lib/python3.6/site-packages/vdsm/rpc/http.py", line 216, in
_getIntHeader
value = self._getRequiredHeader(headerName, missingError)
  File "/usr/lib/python3.6/site-packages/vdsm/rpc/http.py", line 221, in
_getRequiredHeader
value = self.headers.getheader(
AttributeError: 'HTTPMessage' object has no attribute 'getheader'
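
A side note on the traceback above: Python 3 replaced py2's
`mimetools.Message` (which had `getheader()`) with `email.message.Message`,
which only offers dict-style access. A minimal compatibility sketch - the
names here are illustrative, not vdsm's actual `_getRequiredHeader` code:

```python
from email.message import Message


def get_required_header(headers, name, missing_error):
    # py2 request handlers expose headers.getheader(); py3's
    # email.message.Message dropped it in favor of dict-style get().
    if hasattr(headers, "getheader"):
        value = headers.getheader(name)  # Python 2 path
    else:
        value = headers.get(name)        # Python 3 path
    if value is None:
        raise ValueError(missing_error)
    return value


headers = Message()
headers["Content-Length"] = "512"
print(get_required_header(headers, "Content-Length", "411 Length Required"))
# 512
```

Guarding on `hasattr` keeps a single code path working on both interpreters
until py2 support is dropped.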


> Once the current patches are merged it’s going to be far easier for
> everyone to resolve the other remaining issues
>

I believe so as well, but it's up to the Gerrit committee :)


> >
> >> No error message in vdsm.log.
> >>
> >> [image: image.png]
> >>
> >> On Mon, Jul 29, 2019 at 5:13 PM Amit Bawer  wrote:
> >>
> >>> I see. Since there are several patches on this topic, please ping me when
> >>> it's merged and I'll rebase the PoC branch.
> >>> Thanks!
> >>>
> >>> On Mon, Jul 29, 2019 at 4:51 PM Marcin Sobczyk 
> >>> wrote:
> >>>
> >>>>
> >>>> On 7/29/19 3:40 PM, Amit Bawer wrote:
> >>>>
> >>>> Thanks Marcin.
> >>>> I think we made progress; the former qemu spice TLS port error is now
> >>>> gone with this hook.
> >>>>
> >>>> Now it seems like a py3 issue for hooks handling:
> >>>>
> >>>> Unfortunately it doesn't mean the hook actually worked - now you get
> an
> >>>> error probably a bit earlier, when trying to run the hook and never
> get to
> >>>> the previous place.
> >>>> As I mentioned in the previous email you need my hook fixes for this
> >>>> stuff to work.
> >>>> You can do a quick and dirty fix by simply taking 'hooks.py' from
> >>>> https://gerrit.ovirt.org/#/c/102049/ or rebase on top of the whole
> >>>> 'py3-hooks' topic.
> >>>>
> >>>>
> >>>>
> >>>> 2019-07-29 09:29:54,981-0400 INFO  (vm/f62ae48a) [vds] prepared volume
> >>>> path: /rhev/data-center/mnt/10.35.0.
> 136:_exports_data/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65
> >>>> (clientIF:501)
> >>>> 2019-07-29 09:29:54,982-0400 INFO  (vm/f62ae48a) [virt.vm]
> >>>> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') Enabling drive
> monitoring
> >>>> (drivemonitor:56)
> >>>> 2019-07-29 09:29:55,052-0400 WARN  (vm/f62ae48a) [root] Attempting to
> add
> >>>> an existing net user: ovirtmgmt/f62ae48a-4e6f-4763-9a66-48e04708a2b5
> >>>> (libvirtnetwork:192)
> >>>> 2019-07-29 09:29:55,054-0400 INFO  (vm/f62ae48a) [virt.vm]
> >>>> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') drive 'hdc' path:
> 'file=' ->
> >>>> '*file=' (storagexml:333)
> >>>> 2019-07-29 09:29:55,054-0400 INFO  (vm/f62ae48a) [virt.vm]
> >>>> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') drive 'vda' path:
> >>>>
> 'file=/rhev/data-center/dab8cf3a-a969-11e9-84eb-080027624b78/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65'
> >>>> -> '*file=/rhev/data-cen

[ovirt-devel] Re: Vdsm NFS on RHEL 8

2019-07-29 Thread Amit Bawer
I see. Since there are several patches on this topic, please ping me when
it's merged and I'll rebase the PoC branch.
Thanks!

On Mon, Jul 29, 2019 at 4:51 PM Marcin Sobczyk  wrote:

>
> On 7/29/19 3:40 PM, Amit Bawer wrote:
>
> Thanks Marcin.
> I think we made progress; the former qemu spice TLS port error is now gone
> with this hook.
>
> Now it seems like a py3 issue for hooks handling:
>
> Unfortunately it doesn't mean the hook actually worked - now you get an
> error probably a bit earlier, when trying to run the hook and never get to
> the previous place.
> As I mentioned in the previous email you need my hook fixes for this stuff
> to work.
> You can do a quick and dirty fix by simply taking 'hooks.py' from
> https://gerrit.ovirt.org/#/c/102049/ or rebase on top of the whole
> 'py3-hooks' topic.
>
>
>
> 2019-07-29 09:29:54,981-0400 INFO  (vm/f62ae48a) [vds] prepared volume
> path: 
> /rhev/data-center/mnt/10.35.0.136:_exports_data/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65
> (clientIF:501)
> 2019-07-29 09:29:54,982-0400 INFO  (vm/f62ae48a) [virt.vm]
> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') Enabling drive monitoring
> (drivemonitor:56)
> 2019-07-29 09:29:55,052-0400 WARN  (vm/f62ae48a) [root] Attempting to add
> an existing net user: ovirtmgmt/f62ae48a-4e6f-4763-9a66-48e04708a2b5
> (libvirtnetwork:192)
> 2019-07-29 09:29:55,054-0400 INFO  (vm/f62ae48a) [virt.vm]
> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') drive 'hdc' path: 'file=' ->
> '*file=' (storagexml:333)
> 2019-07-29 09:29:55,054-0400 INFO  (vm/f62ae48a) [virt.vm]
> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') drive 'vda' path:
> 'file=/rhev/data-center/dab8cf3a-a969-11e9-84eb-080027624b78/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65'
> -> 
> '*file=/rhev/data-center/mnt/10.35.0.136:_exports_data/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65'
> (storagexml:333)
> 2019-07-29 09:29:55,056-0400 ERROR (vm/f62ae48a) [virt.vm]
> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') The vm start process failed
> (vm:841)
> Traceback (most recent call last):
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 775, in
> _startUnderlyingVm
> self._run()
>   File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2564, in
> _run
> final_callback=self._updateDomainDescriptor)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/hooks.py", line 159,
> in before_vm_start
> raiseError=False, errors=errors)
>   File "/usr/lib/python3.6/site-packages/vdsm/common/hooks.py", line 79,
> in _runHooksDir
> os.write(data_fd, data or '')
> TypeError: a bytes-like object is required, not 'str'
>
> On Mon, Jul 29, 2019 at 4:16 PM Marcin Sobczyk 
> wrote:
>
>>
>> On 7/29/19 1:14 PM, Amit Bawer wrote:
>>
>> Reviving the mail thread to check on the resolution of non-TLS host-engine
>> communication:
>>
>> Current master base for PoC RHEL8 host is:
>>
>> commit cfe7b11c71c1bf0dada89a8209c8d544b0d0f138 (vdsm-master/master)
>> Author: Marcin Sobczyk 
>> Date:   Fri Jul 12 12:54:57 2019 +0200
>>
>> When trying to "Run" a VM on RHEL8, vdsm.log shows the following failure trace:
>>
>> 2019-07-29 06:58:49,140-0400 INFO  (vm/f62ae48a) [virt.vm]
>> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') [libvirt domain XML for VM
>> 'vm1' elided - markup stripped by the list archive]

[ovirt-devel] Re: Vdsm NFS on RHEL 8

2019-07-29 Thread Amit Bawer
Thanks Marcin.
I think we made progress; the former qemu spice TLS port error is now gone
with this hook.

Now it seems like a py3 issue for hooks handling:

2019-07-29 09:29:54,981-0400 INFO  (vm/f62ae48a) [vds] prepared volume
path: 
/rhev/data-center/mnt/10.35.0.136:_exports_data/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65
(clientIF:501)
2019-07-29 09:29:54,982-0400 INFO  (vm/f62ae48a) [virt.vm]
(vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') Enabling drive monitoring
(drivemonitor:56)
2019-07-29 09:29:55,052-0400 WARN  (vm/f62ae48a) [root] Attempting to add
an existing net user: ovirtmgmt/f62ae48a-4e6f-4763-9a66-48e04708a2b5
(libvirtnetwork:192)
2019-07-29 09:29:55,054-0400 INFO  (vm/f62ae48a) [virt.vm]
(vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') drive 'hdc' path: 'file=' ->
'*file=' (storagexml:333)
2019-07-29 09:29:55,054-0400 INFO  (vm/f62ae48a) [virt.vm]
(vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') drive 'vda' path:
'file=/rhev/data-center/dab8cf3a-a969-11e9-84eb-080027624b78/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65'
-> 
'*file=/rhev/data-center/mnt/10.35.0.136:_exports_data/8a68eacc-0e0e-436a-bb25-c498c9f5f749/images/111de599-2afa-4dbb-9a99-3378ece66187/61e1e186-f289-4ffa-b59e-af90bde5db65'
(storagexml:333)
2019-07-29 09:29:55,056-0400 ERROR (vm/f62ae48a) [virt.vm]
(vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') The vm start process failed
(vm:841)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 775, in
_startUnderlyingVm
self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2564, in
_run
final_callback=self._updateDomainDescriptor)
  File "/usr/lib/python3.6/site-packages/vdsm/common/hooks.py", line 159,
in before_vm_start
raiseError=False, errors=errors)
  File "/usr/lib/python3.6/site-packages/vdsm/common/hooks.py", line 79, in
_runHooksDir
os.write(data_fd, data or '')
TypeError: a bytes-like object is required, not 'str'
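
For the record, `os.write` on Python 3 only accepts bytes, while the hook
data here arrives as a str - which is exactly what Marcin's py3-hooks patches
address. A hedged sketch of the kind of fix involved (illustrative, not the
actual `_runHooksDir` code):

```python
import os
import tempfile


def write_hook_data(data):
    # os.write() needs bytes on py3; encode str input first, and use
    # b"" (not "") as the fallback for empty/None data.
    if isinstance(data, str):
        data = data.encode("utf-8")
    data_fd, data_path = tempfile.mkstemp()
    try:
        os.write(data_fd, data or b"")
    finally:
        os.close(data_fd)
    return data_path


path = write_hook_data("<domain type='kvm'/>")
with open(path, "rb") as f:
    print(f.read())  # b"<domain type='kvm'/>"
os.unlink(path)
```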

On Mon, Jul 29, 2019 at 4:16 PM Marcin Sobczyk  wrote:

>
> On 7/29/19 1:14 PM, Amit Bawer wrote:
>
> Reviving the mail thread to check on the resolution of non-TLS host-engine
> communication:
>
> Current master base for PoC RHEL8 host is:
>
> commit cfe7b11c71c1bf0dada89a8209c8d544b0d0f138 (vdsm-master/master)
> Author: Marcin Sobczyk 
> Date:   Fri Jul 12 12:54:57 2019 +0200
>
> When trying to "Run" a VM on RHEL8, vdsm.log shows the following failure trace:
>
> 2019-07-29 06:58:49,140-0400 INFO  (vm/f62ae48a) [virt.vm]
> (vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') [libvirt domain XML for VM
> 'vm1' elided - markup stripped by the list archive]

[ovirt-devel] Re: Jenkins CI: "Testing system error"

2019-07-29 Thread Amit Bawer
On Mon, Jul 29, 2019 at 4:03 PM Eyal Edri  wrote:

>
>
> On Mon, Jul 29, 2019 at 12:06 PM Amit Bawer  wrote:
>
>> Lately we have been experiencing check-patch issues where CI check-patch
>> runs end with "Testing system error". [1]
>> Is there a planned action to resolve those issues?
>>
>> [1]
>> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/8995/artifact/ci_build_summary.html
>>
>
> Do you have another link that shows the same problem, that link is already
> cleaned from history.
> I do see the latest builds are successful.
> https://jenkins.ovirt.org/job/vdsm_standard-check-patch/
>

I have recent failures due to Gerrit connectivity:
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/9061/console
which might be relevant.


> It might be due to an external Fedora mirror, as we've seen it's not stable
>
>
>>
>> Thanks.
>>
>> ___
>> Devel mailing list -- devel@ovirt.org
>> To unsubscribe send an email to devel-le...@ovirt.org
>> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
>> oVirt Code of Conduct:
>> https://www.ovirt.org/community/about/community-guidelines/
>> List Archives:
>> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/K5KJDEA6WY3IKLPMVXMDPDD2BJWMU2B4/
>>
>
>
> --
>
> Eyal edri
>
> He / Him / His
>
>
> MANAGER
>
> CONTINUOUS PRODUCTIZATION
>
> SYSTEM ENGINEERING
>
> Red Hat <https://www.redhat.com/>
> <https://red.ht/sig>
> phone: +972-9-7692018
> irc: eedri (on #tlv #rhev-dev #rhev-integ #cp-devel)
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/JCNJ3LWT5NWNBVDW43X2EFRBZ34LRZFY/


[ovirt-devel] Re: Vdsm NFS on RHEL 8

2019-07-29 Thread Amit Bawer
Reviving the mail thread to check on the resolution of non-TLS host-engine
communication:

Current master base for PoC RHEL8 host is:

commit cfe7b11c71c1bf0dada89a8209c8d544b0d0f138 (vdsm-master/master)
Author: Marcin Sobczyk 
Date:   Fri Jul 12 12:54:57 2019 +0200

When trying to "Run" a VM on RHEL8, vdsm.log shows the following failure trace:

2019-07-29 06:58:49,140-0400 INFO  (vm/f62ae48a) [virt.vm]
(vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') [libvirt domain XML for VM
'vm1' elided - markup stripped by the list archive] (vm:2570)
2019-07-29 06:58:49,845-0400 ERROR (vm/f62ae48a) [virt.vm]
(vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') The vm start process failed
(vm:841)
Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 775, in
_startUnderlyingVm
self._run()
  File "/usr/lib/python3.6/site-packages/vdsm/virt/vm.py", line 2575, in
_run
dom.createWithFlags(flags)
  File "/usr/lib/python3.6/site-packages/vdsm/common/libvirtconnection.py",
line 131, in wrapper
ret = f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/vdsm/common/function.py", line 94,
in wrapper
return func(inst, *args, **kwargs)
  File "/usr/lib64/python3.6/site-packages/libvirt.py", line 1110, in
createWithFlags
if ret == -1: raise libvirtError ('virDomainCreateWithFlags() failed',
dom=self)
libvirt.libvirtError: unsupported configuration: Auto allocation of spice
TLS port requested but spice TLS is disabled in qemu.conf
2019-07-29 06:58:49,845-0400 INFO  (vm/f62ae48a) [virt.vm]
(vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') Changed state to Down:
unsupported configuration: Auto allocation of spice TLS port requested but
spice TLS is disabled in qemu.conf (code=1) (vm:1595)
2019-07-29 06:58:49,875-0400 INFO  (vm/f62ae48a) [virt.vm]
(vmId='f62ae48a-4e6f-4763-9a66-48e04708a2b5') Stopping connection
(guestagent:455)
2019-07-29 06:58:49,875-0400 DEBUG (vm/f62ae48a) [jsonrpc.Notification]
Sending event {"jsonrpc": "2.0", "method":
"|virt|VM_status|f62ae48a-4e6f-4763-9a66-48e04708a2b5", "params":
{"f62ae48a-4e6f-4763-9a66-48e04708a2b5": {"status": "Down", "vmId":
"f62ae48a-4e6f-4763-9a66-48e04708a2b5", "exitCode": 1, "exitMessage":
"unsupported configuration: Auto allocation of spice TLS port requested but
spice TLS is disabled in qemu.conf", "exitReason": 1}, "notify_time":
4883259290}} (__init__:181)
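
For anyone hitting the same wall: the error above means the domain XML asks
libvirt to auto-allocate a SPICE TLS port while qemu has SPICE TLS turned
off. On an oVirt host the supported route is rerunning vdsm-tool's configure
step; the knobs it ultimately controls live in qemu.conf and look roughly
like this - the cert directory below is an assumption, verify it on your
host:

```
# /etc/libvirt/qemu.conf - sketch, not a drop-in config
spice_tls = 1
spice_tls_x509_cert_dir = "/etc/pki/vdsm/libvirt-spice"
# restart libvirtd afterwards for the change to take effect
```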






On Wed, Jul 24, 2019 at 12:09 PM Amit Bawer  wrote:

>
>
> On Wed, Jul 24, 2019 at 12:02 PM Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>>
>> On 24 Jul 2019, at 10:36, Amit Bawer  wrote:
>>
>> Per +Milan Zamazal   comment, adding +devel
>> 
>>
>> On Wed, Jul 24, 2019 at 11:32 AM Michal Skrivanek <
>> michal.skriva...@redhat.com> wrote:
>>
>>>
>>>
>>> On 24 Jul 2019, at 10:24, Amit Bawer  wrote:
>>>
>>> Thanks, applied the fixed patch.
>>>
>>> Now I am punished for choosing not to work with SSL/TLS in Vdsm when
>>> trying to "Run" a VM.
>>> - Any known workaround for this?
>>>
>>>
>> yes, vdsm-tool reconfigure
&

[ovirt-devel] Jenkins CI: "Testing system error"

2019-07-29 Thread Amit Bawer
Lately we have been experiencing check-patch issues where CI check-patch runs
end with "Testing system error". [1]
Is there a planned action to resolve those issues?

[1]
https://jenkins.ovirt.org/job/vdsm_standard-check-patch/8995/artifact/ci_build_summary.html

Thanks.
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/K5KJDEA6WY3IKLPMVXMDPDD2BJWMU2B4/


[ovirt-devel] Re: Vdsm NFS on RHEL 8

2019-07-24 Thread Amit Bawer
On Wed, Jul 24, 2019 at 12:02 PM Michal Skrivanek <
michal.skriva...@redhat.com> wrote:

>
>
> On 24 Jul 2019, at 10:36, Amit Bawer  wrote:
>
> Per +Milan Zamazal   comment, adding +devel
> 
>
> On Wed, Jul 24, 2019 at 11:32 AM Michal Skrivanek <
> michal.skriva...@redhat.com> wrote:
>
>>
>>
>> On 24 Jul 2019, at 10:24, Amit Bawer  wrote:
>>
>> Thanks, applied the fixed patch.
>>
>> Now I am punished for choosing not to work with SSL/TLS in Vdsm when
>> trying to "Run" a VM.
>> - Any known workaround for this?
>>
>>
> yes, vdsm-tool reconfigure
>

"vdsm-tool reconfigure" is not a valid option.


>
>> That’s part of the ongoing fixes, please don’t discuss this privately,
>> this belongs to devel@ list.
>> Many people are struggling with the same issues while they’re working on
>> their areas, and we need complete visibility
>>
>>
>> 2019-07-24 04:04:54,610-0400 INFO  (vm/01de706d) [virt.vm]
>> (vmId='01de706d-ee4c-484f-a17f-6b3355adf047') [libvirt domain XML for VM
>> 'vm1' elided - markup stripped by the list archive] (vm:2570)
>> 2019-07-24 04:04:55,348-0400 ERROR (vm/01de706d) [virt.vm]
>> (vmId='01de706d-ee4c-484f-a17f-6b3355adf047') The vm start process failed
>> (vm:841)
>> Traceback (most recent call last):
&

[ovirt-devel] Re: Why filetransaction needs to encode the content to utf-8?

2019-07-23 Thread Amit Bawer
Not sure if it's helpful, but I didn't see any other reply, so anyway:

The change in [1] assumes a Unicode Sandwich [5], where the given content is
assumed to already be decoded text (a unicode object in py2, a str object in
py3, six.text_type in both py versions), which is then encoded back into
utf-8 bytes when it's time to write it out. In your case the certificate
contents were read as a plain str in Python 2, which is by default assumed
to have ASCII encoding, so the top bread slice of the sandwich was missing
and the jam spilled out (the exception).

The encode line in [1] is mostly there for the sake of Python 3, where we
can no longer treat bytes and strs the same, since the default encoding of
py3 strs is Unicode rather than ASCII as in py2. So if you remove [1] you'll
probably create problems for py3 (comparing bytes with strings, TypeErrors,
etc.).

[5]
https://stackoverflow.com/questions/21129020/how-to-fix-unicodedecodeerror-ascii-codec-cant-decode-byte
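
To make the sandwich concrete, here is a tiny stdlib-only sketch of the
boundary rule described above (illustrative - not otopi's actual
filetransaction code):

```python
def to_utf8_bytes(content):
    # Inside the program, content should be text (unicode on py2, str on
    # py3); bytes appear only at I/O boundaries. If we are handed
    # already-encoded bytes (the py2 plain-str case from this thread),
    # decode first so the final encode doesn't crash or double-encode.
    if isinstance(content, bytes):
        content = content.decode("utf-8")
    return content.encode("utf-8")


org_name = u"Soci\u00e9t\u00e9 Exemple"  # organization name with non-ASCII
print(to_utf8_bytes(org_name))
# b'Soci\xc3\xa9t\xc3\xa9 Exemple'
print(to_utf8_bytes(b"Soci\xc3\xa9t\xc3\xa9 Exemple") == to_utf8_bytes(org_name))
# True
```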


On Tue, Jul 23, 2019 at 1:21 PM Yedidyah Bar David  wrote:

> Hi Nir and all,
>
> In [1] you added line 151, to encode the contents to utf-8. Do you
> remember why you needed that? What happens if I remove this line?
>
> I am working on [2]. It fails on that line, because the current
> content, if organization name is unicode, has a UTF-8 encoded string
> already, but is a python str (not unicode). Tried patching otopi [3],
> did a few attempts (some of them also pushed there, check the
> different patchsets), but none worked. So I am going to patch
> postinstall file generation instead [4], but I don't like this.
>
> Any hints are welcome. Thanks and best regards,
>
> [1] https://gerrit.ovirt.org/#/c/92435/1/src/otopi/filetransaction.py
>
> [2] https://bugzilla.redhat.com/show_bug.cgi?id=1729511
>
> [3] https://gerrit.ovirt.org/102085
>
> [4] https://gerrit.ovirt.org/102089
> --
> Didi
> ___
> Devel mailing list -- devel@ovirt.org
> To unsubscribe send an email to devel-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/devel@ovirt.org/message/3JZCK5POIBNEDIF2I7ABDD3VNOLZOUK3/
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/V5Z5FRTYRL26JRGSYD6VCHG74SUQT5EC/


[ovirt-devel] Re: make rpm for vdsm on CentOS Linux release 7.6.1810 (Core)

2019-05-16 Thread Amit Bawer
Thanks, I'll check it out.

On Wed, May 15, 2019 at 6:02 PM Nir Soffer  wrote:

> On Wed, May 15, 2019 at 4:28 PM Amit Bawer  wrote:
>
>> Hi,
>>
>> I am trying to follow the basic procedure described here
>> https://github.com/oVirt/vdsm
>>
>
> This is not a good place to look for the instructions. The right place is:
> https://ovirt.org/develop/developer-guide/vdsm/developers.html
>
>
>> to build the vdsm rpm from source on my CentOS Linux release 7.6.1810
>> (Core) host.
>> I had to add some dependencies along the way, of course, and at that point
>> I am able to make most of it, except for the Python error described below,
>> resulting from the "make rpm" part of the procedure.
>>
>> Am I missing something?
>>
>
> This is a good topic for devel, adding.
>
>
>>
>> Thanks
>>
>> make[1]: Entering directory `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0'
>> Makefile:1002: warning: overriding recipe for target `check-recursive'
>> Makefile:533: warning: ignoring old recipe for target `check-recursive'
>> Making all in contrib
>> make[2]: Entering directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/contrib'
>> make[2]: Nothing to be done for `all'.
>> make[2]: Leaving directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/contrib'
>> Making all in helpers
>> make[2]: Entering directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/helpers'
>> make[2]: Nothing to be done for `all'.
>> make[2]: Leaving directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/helpers'
>> Making all in init
>> make[2]: Entering directory `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/init'
>> Making all in systemd
>> make[3]: Entering directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/init/systemd'
>>   MKDIR_P ./
>>   SED vdsm-tmpfiles.d.conf
>> make[3]: Leaving directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/init/systemd'
>> make[3]: Entering directory `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/init'
>>   MKDIR_P ./
>>   SED vdsmd_init_common.sh
>> make[3]: Leaving directory `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/init'
>> make[2]: Leaving directory `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/init'
>> Making all in lib
>> make[2]: Entering directory `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/lib'
>> Making all in sos
>> make[3]: Entering directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/lib/sos'
>>   MKDIR_P ./
>>   SED vdsm.py
>> make[3]: Leaving directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/lib/sos'
>> Making all in vdsm
>> make[3]: Entering directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/lib/vdsm'
>> Making all in api
>> make[4]: Entering directory
>> `/home/abawer/rpmbuild/BUILD/vdsm-4.40.0/lib/vdsm/api'
>>   Generate vdsm-api.html
>> chmod u+w .
>> PYTHONPATH=./../../:./../../vdsm \
>> python2.7 ./schema_to_html.py vdsm-api ./vdsm-api.html
>> Traceback (most recent call last):
>>   File "./schema_to_html.py", line 250, in 
>> main()
>>   File "./schema_to_html.py", line 240, in main
>>
>> choices=[st.value for st in vdsmapi.SchemaType])
>> TypeError: 'type' object is not iterable
>>
>
> Never had this error.
>
> After you prepare the host as described in:
>
> https://ovirt.org/develop/developer-guide/vdsm/developers.html#installing-the-required-repositories
>
> https://ovirt.org/develop/developer-guide/vdsm/developers.html#getting-the-source
>
> https://ovirt.org/develop/developer-guide/vdsm/developers.html#installing-the-required-packages
>
> is to install latest release of vdsm:
>
> yum install vdsm vdsm-client
>
> Otherwise you will have to painfully install some of the packages or
> install lot of packages you
> don't need (e.g. mostly lot of hooks).
>
> When your host is ready, you can build vdsm from source and upgrade the
> installed packages:
>
> git clean -dxf
> ./autogen.sh --system --enable-timestamp
> make
> rm -rf ~/rpmbuild
> make rpm
> (cd ~/rpmbuild/RPMS && sudo yum upgrade */*.rpm)
>
> I think we should drop the instructions in the vdsm README and point to the
> page on ovirt.org,
> or replace them with a short version that works and is easier to maintain.
>
> Nir
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/LRK6A2TXDESXIQXWWAFCE5K7BPBB7LO3/
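
A footnote on the `TypeError: 'type' object is not iterable` in the build log
above: a list comprehension over a class only works when the class is an
`enum.Enum` (which is what the newer `vdsmapi.SchemaType` is); iterating a
plain class raises exactly this error, so a stale vdsmapi on the build host
is the likely culprit. A small illustration with hypothetical stand-ins, not
the real vdsmapi:

```python
from enum import Enum


class SchemaTypeEnum(Enum):      # how an iterable SchemaType looks
    VDSM_API = "vdsm-api"
    VDSM_EVENTS = "vdsm-api-events"


class SchemaTypePlain(object):   # an older, plain-class definition
    VDSM_API = "vdsm-api"
    VDSM_EVENTS = "vdsm-api-events"


print([st.value for st in SchemaTypeEnum])
# ['vdsm-api', 'vdsm-api-events']

try:
    [st.value for st in SchemaTypePlain]
except TypeError as e:
    print(e)  # 'type' object is not iterable
```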


[ovirt-devel] Re: Problem on host deployment from engine

2019-04-16 Thread Amit Bawer
To refine:

1. Yes - I had to install the rpms on the host machine itself before being
able to actually deploy it from the engine.

2. I followed the "python2" packages installation guidelines.

On Tue, Apr 16, 2019 at 10:00 AM Yedidyah Bar David  wrote:

> On Mon, Apr 15, 2019 at 3:14 PM Nir Soffer  wrote:
> >
> >
> >
> > On Mon, Apr 15, 2019, 14:59 Amit Bawer  wrote:
> >>
> >> Hello Didi & Sandro,
> >>
> >> I have encountered following issue when attempting to deploy a host
> from the engine management.
> >>
> >> Engine: Fedora 28
> >> Host: CentOS 7.6.1810
> >>
> >> engine.log error:
> >>
> >> 2019-04-14 14:08:35,578+03 INFO
> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> (EE-ManagedThreadFactory-engine-Thread-2) [628278a7] SSH execute '
> root@10.35.0.229' 'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}" mktemp
> -d -t ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" > /dev/null
> 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar --warning=no-timestamp
> -C "${MYTMP}" -x &&  "${MYTMP}"/ovirt-host-deploy
> DIALOG/dialect=str:machine DIALOG/customization=bool:True'
> >> 2019-04-14 14:08:35,670+03 ERROR
> [org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
> (VdsDeploy) [628278a7] EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An
> error has occurred during installation of Host host1: Python is required
> but missing.
> >> 2019-04-14 14:08:35,688+03 ERROR
> [org.ovirt.engine.core.uutils.ssh.SSHDialog]
> (EE-ManagedThreadFactory-engine-Thread-2) [628278a7] SSH error running
> command root@10.35.0.229:'umask 0077; MYTMP="$(TMPDIR="${OVIRT_TMPDIR}"
> mktemp -d -t ovirt-XX)"; trap "chmod -R u+rwX \"${MYTMP}\" >
> /dev/null 2>&1; rm -fr \"${MYTMP}\" > /dev/null 2>&1" 0; tar
> --warning=no-timestamp -C "${MYTMP}" -x &&  "${MYTMP}"/ovirt-host-deploy
> DIALOG/dialect=str:machine DIALOG/customization=bool:True': IOException:
> Command returned failure code 1 during SSH session 'root@10.35.0.229'
> >>
> >>
> >> I managed to resolve it manually by installing following rpms over the
> host machine before reattempting to deploy again the host from the engine
> management:
> >>
> >> python2-otopi
> >> python2-ovirt-host-deploy
>
> On the _host_ (the hypervisor, the machine you want to add as a host
> to the engine), or on the _engine_ machine?
>
> If former, please try latter and report. Thanks.
>
> You should not need to install anything on the host machine. If you
> do, that's a bug.
>

> For adding a python2 host, you need python2 otopi/host-deploy packages
> on the engine machine. That's expected. If that's your only issue, and
> we want to "solve" it, we have two options. I personally do not have a
> strong preference:
>
> 1. Merely document this somewhere (in engine's README or whatever)
> 2. Make the engine require both python2-otopi and python3-otopi
>
> Latter option will very cleanly and easily solve your current problem,
> but will not be possible once we support the engine on an OS that has
> only python3 - perhaps fedora 31 or so, see e.g.:
>
> https://fedoraproject.org/wiki/Changes/Mass_Python_2_Package_Removal

Ideas/opinions are welcome.
>
> Best regards,
> --
> Didi
>
___
Devel mailing list -- devel@ovirt.org
To unsubscribe send an email to devel-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/devel@ovirt.org/message/V2SVUSODKWTGMZJLQZQO2AJDWEUPKM6W/