[ovirt-devel] Re: [ovirt-users] Re: Removal of dpdk

2020-11-04 Thread Florian Schmid via Devel


Hi Dominik,

I don't have any concerns; I only wanted to know why it will be removed,
because it can increase network performance a lot.

Thank you very much for your explanation.

I fully understand that you want to remove features when nobody is using
them.

BR Florian


Von: "Dominik Holler"  
An: "Florian Schmid"  
CC: "Ales Musil" , "users" , "devel" 
 
Gesendet: Mittwoch, 4. November 2020 12:15:47 
Betreff: Re: [ovirt-users] Re: Removal of dpdk 

Hi Florian, 
thanks for your thoughts! 

On Tue, Nov 3, 2020 at 3:21 PM Florian Schmid via Users <us...@ovirt.org> wrote:



> Hi Ales,
>
> what do you mean with "not maintained for a long time"?

The oVirt integration of dpdk was not maintained.

> DPDK is heavily developed and makes the Linux network extremely fast.
>
> I don't think that SR-IOV can replace it,

The removal of dpdk is about removing the dpdk support from oVirt hosts only.
We wonder if there is someone using dpdk to attach oVirt VMs to physical NICs.
We are aware that many users use SR-IOV, especially in scenarios that require
a high rate of Ethernet frames or low latency in VMs,
but we are not aware of users using dpdk to connect the oVirt VMs to the
physical NICs of the host.

> because packets must still be processed by the kernel, which is really slow
> and CPU demanding.

With SR-IOV the packets might be processed by the guest kernel, but not by the
host kernel.
oVirt is focused on the host kernel, while the guest OS is managed by the user
of oVirt.

Did this explanation address your concerns?


> BR Florian
>
> From: "Ales Musil" <amu...@redhat.com>
> To: "Nir Soffer" <nsof...@redhat.com>
> CC: "users" <us...@ovirt.org>, "devel" <devel@ovirt.org>
> Sent: Tuesday, November 3, 2020 13:56:12
> Subject: [ovirt-users] Re: Removal of dpdk



> On Tue, Nov 3, 2020 at 1:52 PM Nir Soffer <nsof...@redhat.com> wrote:
>
>> On Tue, Nov 3, 2020 at 1:07 PM Ales Musil <amu...@redhat.com> wrote:
>>
>>> Hello,
>>> we have decided to remove dpdk in the upcoming version of oVirt, namely 4.4.4.
>>> Let us know if there are any concerns about this.
>>
>> Can you give more info on why we want to remove this feature, and what is
>> the replacement for existing users?
>>
>> Nir
>
> Sure,
> the feature was only experimental and not maintained for a long time. The
> replacement is to use SR-IOV,
> which is supported by oVirt.

> Thanks,
> Ales
>
> --
>
> Ales Musil
>
> Software Engineer - RHV Network
>
> Red Hat EMEA <https://www.redhat.com/>
>
> amu...@redhat.com   IM: amusil




[ovirt-devel] Re: Libvirt driver iothread property for virtio-scsi disks

2020-11-04 Thread Sergio Lopez
On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> The docs[1] say:
> 
> - The optional iothread attribute assigns the disk to an IOThread as defined 
> by
>   the range for the domain iothreads value. Multiple disks may be assigned to
>   the same IOThread and are numbered from 1 to the domain iothreads value.
>   Available for a disk device target configured to use "virtio" bus and "pci"
>   or "ccw" address types. Since 1.2.8 (QEMU 2.1)
> 
> Does it mean that virtio-scsi disks do not use iothreads?

virtio-scsi disks can use iothreads, but they are configured in the
scsi controller, not in the disk itself. All disks attached to the
same controller will share the same iothread, but you can also attach
multiple controllers.

> I'm experiencing a horrible performance using nested vms (up to 2 levels of
> nesting) when accessing NFS storage running on one of the VMs. The NFS
> server is using scsi disk.
> 
> My theory is:
> - Writing to NFS server is very slow (too much nesting, slow disk)
> - Not using iothreads (because we don't use virtio?)
> - Guest CPU is blocked by slow I/O

I would rule out the lack of iothreads as the culprit. They do improve
performance, but without them the performance should still be quite
decent. Probably something else is causing the trouble.

I would do a step-by-step analysis, testing the NFS performance from
outside the VM first, and then working upwards from that.
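
For example, a minimal baseline from outside the VM (the /mnt/nfs mount point
and the sizes here are illustrative assumptions, not taken from this setup)
could be a direct sequential write through the NFS mount:

$ dd if=/dev/zero of=/mnt/nfs/ddtest.bin bs=1M count=1024 oflag=direct conv=fsync

or a small fio job:

$ fio --name=nfs-seqwrite --directory=/mnt/nfs --rw=write --bs=1M --size=1g \
    --direct=1 --ioengine=libaio

Comparing these numbers at each level of nesting should show where the
throughput collapses.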

Sergio.

> Does this make sense?
> 
> [1] https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms
> 
> Nir
> 




[ovirt-devel] Building lago from source

2020-11-04 Thread Nir Soffer
I'm trying to test with ost:
https://github.com/lago-project/lago/pull/815

So I cloned the project on the OST VM and built rpms:

make
make rpm

The result is:
lago-1.0.2-1.el8.noarch.rpm  python3-lago-1.0.2-1.el8.noarch.rpm

But the lago version installed by setup_for_ost.sh is:
$ rpm -q lago
lago-1.0.11-1.el8.noarch

I tried to install lago from master, and then lago_init fails:

$ lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Using images ost-images-el8-host-installed-1-202011021248.x86_64,
ost-images-el8-engine-installed-1-202011021248.x86_64 containing
ovirt-engine-4.4.4-0.0.master.20201031195930.git8f858d6c01d.el8.noarch
vdsm-4.40.35.1-1.el8.x86_64
usage: lago [-h] [-l {info,debug,error,warning}] [--logdepth LOGDEPTH]
[--version] [--out-format {default,flat,json,yaml}]
[--prefix-path PREFIX_PATH] [--workdir-path WORKDIR_PATH]
[--prefix-name PREFIX_NAME] [--ssh-user SSH_USER]
[--ssh-password SSH_PASSWORD] [--ssh-tries SSH_TRIES]
[--ssh-timeout SSH_TIMEOUT] [--libvirt_url LIBVIRT_URL]
[--libvirt-user LIBVIRT_USER]
[--libvirt-password LIBVIRT_PASSWORD]
[--default_vm_type DEFAULT_VM_TYPE]
[--default_vm_provider DEFAULT_VM_PROVIDER]
[--default_root_password DEFAULT_ROOT_PASSWORD]
[--lease_dir LEASE_DIR] [--reposync-dir REPOSYNC_DIR]
[--ignore-warnings]
VERB ...
lago: error: unrecognized arguments: --ssh-key
/home/nsoffer/src/ovirt-system-tests/deployment-basic-suite-master
/home/nsoffer/src/ovirt-system-tests/basic-suite-master/LagoInitFile

Do we use a customized lago version for ost? Where is the source?

Nir


[ovirt-devel] Testing image transfer and backup with OST environment

2020-11-04 Thread Nir Soffer
I want to share useful info from the OST hackathon we had this week.

Image transfer must work with real hostnames to allow server certificate
verification.
Inside the OST environment, the engine and host names are resolvable, but on
the host (or VM) running OST, the names are not available.

This can be fixed by adding the engine and hosts to /etc/hosts like this:

$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.200.2 engine
192.168.200.3 lago-basic-suite-master-host-0
192.168.200.4 lago-basic-suite-master-host-1

It would be nice if this was automated by OST. You can get the details using:

$ cd src/ovirt-system-tests/deployment-xxx
$ lago status

OST keeps the deployment directory in the source directory. Be careful if you
like to run "git clean -dxf", since it will delete the whole deployment and
you will have to kill the VMs manually later.
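
If the VMs are still running, something like the following (assuming the lago
CLI "destroy" verb is available in your version) should tear them down before
the deployment directory is removed:

$ cd src/ovirt-system-tests/deployment-xxx
$ lago destroy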

The next thing we need is the engine ca cert. It can be fetched like this:

$ curl -k 'https://engine/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' > ca.pem

I would expect OST to do this and put the file in the deployment directory.

To upload or download images, backup vms or use other modern examples from
the sdk, you need to have a configuration file like this:

$ cat ~/.config/ovirt.conf
[engine]
engine_url = https://engine
username = admin@internal
password = 123
cafile = ca.pem

With this, uploading from the same directory where ca.pem is located will
work. If you want it to work from any directory, use an absolute path to the
file.

I created a test image using qemu-img and qemu-io:

$ qemu-img create -f qcow2 test.qcow2 1g

To write some data to the test image we can use qemu-io. This writes 64k of data
(b"\xf0" * 64 * 1024) to offset 1 MiB.

$ qemu-io -f qcow2 -c "write -P 240 1m 64k" test.qcow2

Since this image contains only 64k of data, uploading it should be instant.
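
As an optional sanity check (not part of the original flow here), the
allocation map should show a single 64 KiB data extent at the 1 MiB offset:

$ qemu-img map --output=json test.qcow2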

The last part we need is the imageio client package:

$ dnf install ovirt-imageio-client

To upload the image, we need at least one host up and storage domains
created. I did not find a way to prepare OST for just that, so simply run this
after run_tests has completed. It took about an hour.

To upload the image to raw sparse disk we can use:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
-c engine --sd-name nfs --disk-sparse --disk-format raw test.qcow2
[   0.0 ] Checking image...
[   0.0 ] Image format: qcow2
[   0.0 ] Disk format: raw
[   0.0 ] Disk content type: data
[   0.0 ] Disk provisioned size: 1073741824
[   0.0 ] Disk initial size: 1073741824
[   0.0 ] Disk name: test.raw
[   0.0 ] Disk backup: False
[   0.0 ] Connecting...
[   0.0 ] Creating disk...
[  36.3 ] Disk ID: 26df08cf-3dec-47b9-b776-0e2bc564b6d5
[  36.3 ] Creating image transfer...
[  38.2 ] Transfer ID: de8cfac9-ead2-4304-b18b-a1779d647716
[  38.2 ] Transfer host name: lago-basic-suite-master-host-1
[  38.2 ] Uploading image...
[ 100.00% ] 1.00 GiB, 1.79 seconds, 571.50 MiB/s
[  40.0 ] Finalizing image transfer...
[  44.1 ] Upload completed successfully

I uploaded this before I added the hosts to /etc/hosts, so the upload
was done via the proxy.

Yes, it took 36 seconds to create the disk.

To download the disk use:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
-c engine 5ac63c72-6296-46b1-a068-b1039c8ecbd1 downlaod.qcow2
[   0.0 ] Connecting...
[   0.2 ] Creating image transfer...
[   1.6 ] Transfer ID: a99e2a43-8360-4661-81dc-02828a88d586
[   1.6 ] Transfer host name: lago-basic-suite-master-host-1
[   1.6 ] Downloading image...
[ 100.00% ] 1.00 GiB, 0.32 seconds, 3.10 GiB/s
[   1.9 ] Finalizing image transfer...

We can verify the transfers using checksums. Here we create a checksum
of the remote
disk:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/checksum_disk.py
-c engine 26df08cf-3dec-47b9-b776-0e2bc564b6d5
{
"algorithm": "blake2b",
"block_size": 4194304,
"checksum":
"a79a1efae73484e0218403e6eb715cdf109c8e99c2200265b779369339cf347b"
}

And checksum of the downloaded image - they should match:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/checksum_image.py
downlaod.qcow2
{
  "algorithm": "blake2b",
  "block_size": 4194304,
  "checksum": "a79a1efae73484e0218403e6eb715cdf109c8e99c2200265b779369339cf347b"
}

Same upload to iscsi domain, using qcow2 format:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/upload_disk.py
-c engine --sd-name iscsi --disk-sparse --disk-format qcow2 test.qcow2
[   0.0 ] Checking image...
[   0.0 ] Image format: qcow2
[   0.0 ] Disk format: cow
[   0.0 ] Disk content type: data
[   0.0 ] Disk provisioned size: 1073741824
[   0.0 ] Disk initial size: 458752
[   0.0 ] Disk name: test.qcow2
[   0.0 ] Disk backup: False
[   0.0 ] Connecting...
[   0.0 ] Creating disk...
[  27.8 ] Disk ID: e7ef253e-7baa-4d4a-a9b2-1a6b7db13f41
[  27.8 ] Creating image 

[ovirt-devel] Re: Libvirt driver iothread property for virtio-scsi disks

2020-11-04 Thread Nir Soffer
On Wed, Nov 4, 2020 at 6:54 PM Daniel P. Berrangé  wrote:
>
> On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> > The docs[1] say:
> >
> > - The optional iothread attribute assigns the disk to an IOThread as 
> > defined by
> >   the range for the domain iothreads value. Multiple disks may be assigned 
> > to
> >   the same IOThread and are numbered from 1 to the domain iothreads value.
> >   Available for a disk device target configured to use "virtio" bus and 
> > "pci"
> >   or "ccw" address types. Since 1.2.8 (QEMU 2.1)
> >
> > Does it mean that virtio-scsi disks do not use iothreads?
> >
> > I'm experiencing a horrible performance using nested vms (up to 2 levels of
> > nesting) when accessing NFS storage running on one of the VMs. The NFS
> > server is using scsi disk.
>
> When you say  2 levels of nesting do you definitely have KVM enabled at
> all levels, or are you ending up using TCG emulation, because the latter
> would certainly explain terrible performance.

Good point, I'll check that out, thanks.
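
A quick generic way to verify this (the commands are generic and the VM name
is a placeholder) is to check, on each level of the nesting, that /dev/kvm
exists and that the running domain is type 'kvm' rather than 'qemu':

$ ls -l /dev/kvm
$ virsh dumpxml <vm-name> | head -1    # should show type='kvm', not type='qemu'
$ cat /sys/module/kvm_intel/parameters/nested    # kvm_amd on AMD hosts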

> > My theory is:
> > - Writing to NFS server is very slow (too much nesting, slow disk)
> > - Not using iothreads (because we don't use virtio?)
> > - Guest CPU is blocked by slow I/O
>
> Regards,
> Daniel
> --
> |: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
> |: https://libvirt.org -o-https://fstop138.berrange.com :|
> |: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|
>


[ovirt-devel] Re: Libvirt driver iothread property for virtio-scsi disks

2020-11-04 Thread Nir Soffer
On Wed, Nov 4, 2020 at 6:42 PM Sergio Lopez  wrote:
>
> On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> > The docs[1] say:
> >
> > - The optional iothread attribute assigns the disk to an IOThread as 
> > defined by
> >   the range for the domain iothreads value. Multiple disks may be assigned 
> > to
> >   the same IOThread and are numbered from 1 to the domain iothreads value.
> >   Available for a disk device target configured to use "virtio" bus and 
> > "pci"
> >   or "ccw" address types. Since 1.2.8 (QEMU 2.1)
> >
> > Does it mean that virtio-scsi disks do not use iothreads?
>
> virtio-scsi disks can use iothreads, but they are configured in the
> scsi controller, not in the disk itself. All disks attached to the
> same controller will share the same iothread, but you can also attach
> multiple controllers.

Thanks, I found that we do use this in oVirt: the iothread is set on the
virtio-scsi controller, but the XML snippet did not survive the archive.

However the VMs in this setup are not created by oVirt, but manually using
libvirt. I'll make sure we configure the controller in the same way.
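
For reference, a minimal sketch of what such a configuration looks like in a
libvirt domain (attribute values here are illustrative, not copied from the
oVirt-generated XML):

  <iothreads>1</iothreads>
  <devices>
    <!-- the iothread is assigned on the virtio-scsi controller... -->
    <controller type='scsi' index='0' model='virtio-scsi'>
      <driver iothread='1'/>
    </controller>
    <!-- ...and the disks attached to it use bus='scsi' -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/path/to/disk.qcow2'/>
      <target dev='sda' bus='scsi'/>
    </disk>
  </devices>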

> > I'm experiencing a horrible performance using nested vms (up to 2 levels of
> > nesting) when accessing NFS storage running on one of the VMs. The NFS
> > server is using scsi disk.
> >
> > My theory is:
> > - Writing to NFS server is very slow (too much nesting, slow disk)
> > - Not using iothreads (because we don't use virtio?)
> > - Guest CPU is blocked by slow I/O
>
> I would discard the lack of iothreads as the culprit. They do improve
> the performance, but without them the performance should be quite
> decent anyway. Probably something else is causing the trouble.
>
> I would do a step by step analysis, testing the NFS performance from
> outside the VM first, and then elaborating upwards from that.

Makes sense, thanks.


[ovirt-devel] Re: Libvirt driver iothread property for virtio-scsi disks

2020-11-04 Thread Daniel P . Berrangé
On Wed, Nov 04, 2020 at 05:48:40PM +0200, Nir Soffer wrote:
> The docs[1] say:
> 
> - The optional iothread attribute assigns the disk to an IOThread as defined 
> by
>   the range for the domain iothreads value. Multiple disks may be assigned to
>   the same IOThread and are numbered from 1 to the domain iothreads value.
>   Available for a disk device target configured to use "virtio" bus and "pci"
>   or "ccw" address types. Since 1.2.8 (QEMU 2.1)
> 
> Does it mean that virtio-scsi disks do not use iothreads?
> 
> I'm experiencing a horrible performance using nested vms (up to 2 levels of
> nesting) when accessing NFS storage running on one of the VMs. The NFS
> server is using scsi disk.

When you say  2 levels of nesting do you definitely have KVM enabled at
all levels, or are you ending up using TCG emulation, because the latter
would certainly explain terrible performance.

> 
> My theory is:
> - Writing to NFS server is very slow (too much nesting, slow disk)
> - Not using iothreads (because we don't use virtio?)
> - Guest CPU is blocked by slow I/O

Regards,
Daniel
-- 
|: https://berrange.com  -o-https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o-https://fstop138.berrange.com :|
|: https://entangle-photo.org-o-https://www.instagram.com/dberrange :|


[ovirt-devel] Libvirt driver iothread property for virtio-scsi disks

2020-11-04 Thread Nir Soffer
The docs[1] say:

- The optional iothread attribute assigns the disk to an IOThread as defined by
  the range for the domain iothreads value. Multiple disks may be assigned to
  the same IOThread and are numbered from 1 to the domain iothreads value.
  Available for a disk device target configured to use "virtio" bus and "pci"
  or "ccw" address types. Since 1.2.8 (QEMU 2.1)

Does it mean that virtio-scsi disks do not use iothreads?

I'm experiencing horrible performance using nested VMs (up to 2 levels of
nesting) when accessing NFS storage running on one of the VMs. The NFS
server is using a SCSI disk.

My theory is:
- Writing to NFS server is very slow (too much nesting, slow disk)
- Not using iothreads (because we don't use virtio?)
- Guest CPU is blocked by slow I/O

Does this make sense?

[1] https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms

Nir


[ovirt-devel] Re: [ovirt-users] Re: Removal of dpdk

2020-11-04 Thread Dominik Holler
Hi Florian,
thanks for your thoughts!

On Tue, Nov 3, 2020 at 3:21 PM Florian Schmid via Users 
wrote:

> Hi Ales,
>
> what do you mean with "not maintained for a long time"?
>

The oVirt integration of dpdk was not maintained.


> DPDK is heavily developed and makes the Linux network extremely fast.
>
> I don't think that SR-IOV can replace it,
>

The removal of dpdk is about removing the dpdk support from oVirt hosts
only.
We wonder if there is someone using dpdk to attach oVirt VMs to physical
NICs.
We are aware that many users use SR-IOV, especially in scenarios that require
a high rate of Ethernet frames or low latency in VMs,
but we are not aware of users using dpdk to connect the oVirt VMs to the
physical NICs of the host.



> because packets must still be processed by the kernel, which is really
> slow and CPU demanding.
>


With SR-IOV the packets might be processed by the guest kernel, but not by the
host kernel.
oVirt is focused on the host kernel, while the guest OS is managed by the
user of oVirt.

Did this explanation address your concerns?

BR Florian
>
> --
> *Von: *"Ales Musil" 
> *An: *"Nir Soffer" 
> *CC: *"users" , "devel" 
> *Gesendet: *Dienstag, 3. November 2020 13:56:12
> *Betreff: *[ovirt-users] Re: Removal of dpdk
>
>
>
> On Tue, Nov 3, 2020 at 1:52 PM Nir Soffer  wrote:
>
>> On Tue, Nov 3, 2020 at 1:07 PM Ales Musil  wrote:
>>
>>> Hello,
>>> we have decided to remove dpdk in the upcoming version of oVirt namely
>>> 4.4.4. Let us know if there are any concerns about this.
>>>
>>
>> Can you give more info why we want to remove this feature, and what is
>> the replacement for existing users?
>>
>> Nir
>>
>
> Sure,
> the feature was only experimental and not maintained for a long time. The
> replacement is to use SR-IOV
> which is supported by oVirt.
>
> Thanks,
> Ales
>
>
> --
>
> Ales Musil
>
> Software Engineer - RHV Network
>
> Red Hat EMEA 
>
> amu...@redhat.com   IM: amusil
> 
>


[ovirt-devel] Re: How to set up a (rh)el8 machine for running OST

2020-11-04 Thread Marcin Sobczyk



On 11/4/20 11:29 AM, Yedidyah Bar David wrote:

On Wed, Nov 4, 2020 at 12:18 PM Marcin Sobczyk  wrote:



On 11/3/20 7:21 PM, Nir Soffer wrote:

On Tue, Nov 3, 2020 at 8:05 PM Nir Soffer  wrote:

On Tue, Nov 3, 2020 at 6:53 PM Nir Soffer  wrote:

On Tue, Nov 3, 2020 at 3:22 PM Marcin Sobczyk  wrote:

Hi All,

there are multiple pieces of information floating around on how to set
up a machine
for running OST. Some of them outdated (like dealing with el7), some
of them more recent,
but still a bit messy.

Not long ago, in some email conversation, Milan presented an ansible
playbook that provided
the steps necessary to do that. We've picked up the playbook, tweaked
it a bit, made a convenience shell script wrapper that runs it, and
pushed that into OST project [1].

This script, along with the playbook, should be our
single-source-of-truth, one-stop
solution for the job. It's been tested by a couple of persons and
proved to be able
to set up everything on a bare (rh)el8 machine. If you encounter any
problems with the script
please either report it on the devel mailing list, directly to me, or
simply file a patch.
Let's keep it maintained.

Awesome, thanks!

So setup_for_ost.sh finished successfully (after more than an hour),
but now I see conflicting documentation and comments about how to
run test suites and how to cleanup after the run.

The docs say:
https://ovirt-system-tests.readthedocs.io/en/latest/general/running_tests/index.html

  ./run_suite.sh basic-suite-4.0

But I see other undocumented ways in recent threads:

  run_tests

Trying the run_test option, from recent Mail:


. lagofy.sh
lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k 
/usr/share/ost-images/el8_id_rsa

This fails:

$ . lagofy.sh
Suite basic-suite-master - lago_init
/usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Add your group to qemu's group: "usermod -a -G qemu nsoffer"

setup_for_ost.sh should handle this, no?

It does:
https://github.com/oVirt/ovirt-system-tests/blob/e1c1873d1e7de3f136e46b6355b03b07f05f358e/common/setup/setup_playbook.yml#L95
Maybe you didn't relog so the group inclusion would be effective?
But I agree there should be a message printed to the user if relogging
is necessary - I will write a patch for it.
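
A generic way to check whether the current shell has already picked up the
group (not OST-specific) is:

$ id -nG | grep -qw qemu && echo "qemu group active" || echo "relog, or run 'newgrp qemu'"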


[nsoffer@ost ovirt-system-tests]$ lago_init
/usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Using images ost-images-el8-host-installed-1-202011021248.x86_64,
ost-images-el8-engine-installed-1-202011021248.x86_64 containing
ovirt-engine-4.4.4-0.0.master.20201031195930.git8f858d6c01d.el8.noarch
vdsm-4.40.35.1-1.el8.x86_64
@ Initialize and populate prefix:
# Initialize prefix:
  * Create prefix dirs:
  * Create prefix dirs: Success (in 0:00:00)
  * Generate prefix uuid:
  * Generate prefix uuid: Success (in 0:00:00)
  * Copying ssh key:
  * Copying ssh key: Success (in 0:00:00)
  * Tag prefix as initialized:
  * Tag prefix as initialized: Success (in 0:00:00)
# Initialize prefix: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-engine:
  * Create disk root:
  * Create disk root: Success (in 0:00:00)
  * Create disk nfs:
  * Create disk nfs: Success (in 0:00:00)
  * Create disk iscsi:
  * Create disk iscsi: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-engine: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host-0:
  * Create disk root:
  * Create disk root: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host-0: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host-1:
  * Create disk root:
  * Create disk root: Success (in 0:00:00)
# Create disks for VM lago-basic-suite-master-host-1: Success (in 0:00:00)
# Copying any deploy scripts:
# Copying any deploy scripts: Success (in 0:00:00)
# calling yaml.load() without Loader=... is deprecated, as the
default Loader is unsafe. Please read https://msg.pyyaml.org/load for
full details.
# Missing current link, setting it to default
@ Initialize and populate prefix: ERROR (in 0:00:01)
Error occured, aborting
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 987, in main
  cli_plugins[args.verb].do_run(args)
File "/usr/lib/python3.6/site-packages/lago/plugins/cli.py", line
186, in do_run
  self._do_run(**vars(args))
File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 207, in do_init
  ssh_key=ssh_key,
File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1143,
in virt_conf_from_stream
  ssh_key=ssh_key
File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1269,
in virt_conf
  net_specs=conf['nets'],
File "/usr/lib/python3.6/site-packages/lago/virt.py", line 101, in __init__
  self._nets[name] = self._create_net(spec, compat)
File 

[ovirt-devel] Re: How to set up a (rh)el8 machine for running OST

2020-11-04 Thread Yedidyah Bar David
On Wed, Nov 4, 2020 at 12:18 PM Marcin Sobczyk  wrote:
>
>
>
> On 11/3/20 7:21 PM, Nir Soffer wrote:
> > On Tue, Nov 3, 2020 at 8:05 PM Nir Soffer  wrote:
> >> On Tue, Nov 3, 2020 at 6:53 PM Nir Soffer  wrote:
> >>> On Tue, Nov 3, 2020 at 3:22 PM Marcin Sobczyk  wrote:
>  Hi All,
> 
>  there are multiple pieces of information floating around on how to set
>  up a machine
>  for running OST. Some of them outdated (like dealing with el7), some
>  of them more recent,
>  but still a bit messy.
> 
>  Not long ago, in some email conversation, Milan presented an ansible
>  playbook that provided
>  the steps necessary to do that. We've picked up the playbook, tweaked
>  it a bit, made a convenience shell script wrapper that runs it, and
>  pushed that into OST project [1].
> 
>  This script, along with the playbook, should be our
>  single-source-of-truth, one-stop
>  solution for the job. It's been tested by a couple of persons and
>  proved to be able
>  to set up everything on a bare (rh)el8 machine. If you encounter any
>  problems with the script
>  please either report it on the devel mailing list, directly to me, or
>  simply file a patch.
>  Let's keep it maintained.
> >>> Awesome, thanks!
> >> So setup_for_ost.sh finished successfully (after more than an hour),
> >> but now I see conflicting documentation and comments about how to
> >> run test suites and how to cleanup after the run.
> >>
> >> The docs say:
> >> https://ovirt-system-tests.readthedocs.io/en/latest/general/running_tests/index.html
> >>
> >>  ./run_suite.sh basic-suite-4.0
> >>
> >> But I see other undocumented ways in recent threads:
> >>
> >>  run_tests
> > Trying the run_test option, from recent Mail:
> >
> >> . lagofy.sh
> >> lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k 
> >> /usr/share/ost-images/el8_id_rsa
> > This fails:
> >
> > $ . lagofy.sh
> > Suite basic-suite-master - lago_init
> > /usr/share/ost-images/el8-engine-installed.qcow2 -k
> > /usr/share/ost-images/el8_id_rsa
> > Add your group to qemu's group: "usermod -a -G qemu nsoffer"
> >
> > setup_for_ost.sh should handle this, no?
> It does:
> https://github.com/oVirt/ovirt-system-tests/blob/e1c1873d1e7de3f136e46b6355b03b07f05f358e/common/setup/setup_playbook.yml#L95
> Maybe you didn't relog so the group inclusion would be effective?
> But I agree there should be a message printed to the user if relogging
> is necessary - I will write a patch for it.
>
> >
> > [nsoffer@ost ovirt-system-tests]$ lago_init
> > /usr/share/ost-images/el8-engine-installed.qcow2 -k
> > /usr/share/ost-images/el8_id_rsa
> > Using images ost-images-el8-host-installed-1-202011021248.x86_64,
> > ost-images-el8-engine-installed-1-202011021248.x86_64 containing
> > ovirt-engine-4.4.4-0.0.master.20201031195930.git8f858d6c01d.el8.noarch
> > vdsm-4.40.35.1-1.el8.x86_64
> > @ Initialize and populate prefix:
> ># Initialize prefix:
> >  * Create prefix dirs:
> >  * Create prefix dirs: Success (in 0:00:00)
> >  * Generate prefix uuid:
> >  * Generate prefix uuid: Success (in 0:00:00)
> >  * Copying ssh key:
> >  * Copying ssh key: Success (in 0:00:00)
> >  * Tag prefix as initialized:
> >  * Tag prefix as initialized: Success (in 0:00:00)
> ># Initialize prefix: Success (in 0:00:00)
> ># Create disks for VM lago-basic-suite-master-engine:
> >  * Create disk root:
> >  * Create disk root: Success (in 0:00:00)
> >  * Create disk nfs:
> >  * Create disk nfs: Success (in 0:00:00)
> >  * Create disk iscsi:
> >  * Create disk iscsi: Success (in 0:00:00)
> ># Create disks for VM lago-basic-suite-master-engine: Success (in 
> > 0:00:00)
> ># Create disks for VM lago-basic-suite-master-host-0:
> >  * Create disk root:
> >  * Create disk root: Success (in 0:00:00)
> ># Create disks for VM lago-basic-suite-master-host-0: Success (in 
> > 0:00:00)
> ># Create disks for VM lago-basic-suite-master-host-1:
> >  * Create disk root:
> >  * Create disk root: Success (in 0:00:00)
> ># Create disks for VM lago-basic-suite-master-host-1: Success (in 
> > 0:00:00)
> ># Copying any deploy scripts:
> ># Copying any deploy scripts: Success (in 0:00:00)
> ># calling yaml.load() without Loader=... is deprecated, as the
> > default Loader is unsafe. Please read https://msg.pyyaml.org/load for
> > full details.
> ># Missing current link, setting it to default
> > @ Initialize and populate prefix: ERROR (in 0:00:01)
> > Error occured, aborting
> > Traceback (most recent call last):
> >File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 987, in main
> >  cli_plugins[args.verb].do_run(args)
> >File "/usr/lib/python3.6/site-packages/lago/plugins/cli.py", line
> > 186, in do_run
> >  self._do_run(**vars(args))
> >File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 207, in do_init
> 

[ovirt-devel] Re: How to set up a (rh)el8 machine for running OST

2020-11-04 Thread Marcin Sobczyk



On 11/3/20 7:21 PM, Nir Soffer wrote:

On Tue, Nov 3, 2020 at 8:05 PM Nir Soffer  wrote:

On Tue, Nov 3, 2020 at 6:53 PM Nir Soffer  wrote:

On Tue, Nov 3, 2020 at 3:22 PM Marcin Sobczyk  wrote:

Hi All,

there are multiple pieces of information floating around on how to set
up a machine
for running OST. Some of them outdated (like dealing with el7), some
of them more recent,
but still a bit messy.

Not long ago, in some email conversation, Milan presented an ansible
playbook that provided
the steps necessary to do that. We've picked up the playbook, tweaked
it a bit, made a convenience shell script wrapper that runs it, and
pushed that into OST project [1].

This script, along with the playbook, should be our
single-source-of-truth, one-stop
solution for the job. It's been tested by a couple of persons and
proved to be able
to set up everything on a bare (rh)el8 machine. If you encounter any
problems with the script
please either report it on the devel mailing list, directly to me, or
simply file a patch.
Let's keep it maintained.

Awesome, thanks!

So setup_for_ost.sh finished successfully (after more than an hour),
but now I see conflicting documentation and comments about how to
run test suites and how to cleanup after the run.

The docs say:
https://ovirt-system-tests.readthedocs.io/en/latest/general/running_tests/index.html

 ./run_suite.sh basic-suite-4.0

But I see other undocumented ways in recent threads:

 run_tests

Trying the run_test option, from recent Mail:


. lagofy.sh
lago_init /usr/share/ost-images/el8-engine-installed.qcow2 -k 
/usr/share/ost-images/el8_id_rsa

This fails:

$ . lagofy.sh
Suite basic-suite-master - lago_init
/usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Add your group to qemu's group: "usermod -a -G qemu nsoffer"

setup_for_ost.sh should handle this, no?
It does: 
https://github.com/oVirt/ovirt-system-tests/blob/e1c1873d1e7de3f136e46b6355b03b07f05f358e/common/setup/setup_playbook.yml#L95

Maybe you didn't relog so the group inclusion would be effective?
But I agree there should be a message printed to the user if relogging 
is necessary - I will write a patch for it.




[nsoffer@ost ovirt-system-tests]$ lago_init
/usr/share/ost-images/el8-engine-installed.qcow2 -k
/usr/share/ost-images/el8_id_rsa
Using images ost-images-el8-host-installed-1-202011021248.x86_64,
ost-images-el8-engine-installed-1-202011021248.x86_64 containing
ovirt-engine-4.4.4-0.0.master.20201031195930.git8f858d6c01d.el8.noarch
vdsm-4.40.35.1-1.el8.x86_64
@ Initialize and populate prefix:
   # Initialize prefix:
 * Create prefix dirs:
 * Create prefix dirs: Success (in 0:00:00)
 * Generate prefix uuid:
 * Generate prefix uuid: Success (in 0:00:00)
 * Copying ssh key:
 * Copying ssh key: Success (in 0:00:00)
 * Tag prefix as initialized:
 * Tag prefix as initialized: Success (in 0:00:00)
   # Initialize prefix: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-engine:
 * Create disk root:
 * Create disk root: Success (in 0:00:00)
 * Create disk nfs:
 * Create disk nfs: Success (in 0:00:00)
 * Create disk iscsi:
 * Create disk iscsi: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-engine: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-host-0:
 * Create disk root:
 * Create disk root: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-host-0: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-host-1:
 * Create disk root:
 * Create disk root: Success (in 0:00:00)
   # Create disks for VM lago-basic-suite-master-host-1: Success (in 0:00:00)
   # Copying any deploy scripts:
   # Copying any deploy scripts: Success (in 0:00:00)
   # calling yaml.load() without Loader=... is deprecated, as the
default Loader is unsafe. Please read https://msg.pyyaml.org/load for
full details.
   # Missing current link, setting it to default
@ Initialize and populate prefix: ERROR (in 0:00:01)
Error occured, aborting
Traceback (most recent call last):
   File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 987, in main
 cli_plugins[args.verb].do_run(args)
   File "/usr/lib/python3.6/site-packages/lago/plugins/cli.py", line
186, in do_run
 self._do_run(**vars(args))
   File "/usr/lib/python3.6/site-packages/lago/cmd.py", line 207, in do_init
 ssh_key=ssh_key,
   File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1143,
in virt_conf_from_stream
 ssh_key=ssh_key
   File "/usr/lib/python3.6/site-packages/lago/prefix.py", line 1269,
in virt_conf
 net_specs=conf['nets'],
   File "/usr/lib/python3.6/site-packages/lago/virt.py", line 101, in __init__
 self._nets[name] = self._create_net(spec, compat)
   File "/usr/lib/python3.6/site-packages/lago/virt.py", line 113, in 
_create_net
 return cls(self, net_spec, compat=compat)
   File 

[ovirt-devel] Re: How to set up a (rh)el8 machine for running OST

2020-11-04 Thread Milan Zamazal
Nir Soffer  writes:

> So setup_for_ost.sh finished successfully (after more than an hour),

The long part is probably downloading OST images if you do it over a
slow line.  I don't think there is anything else there that takes a very
long time to execute.