[ovirt-users] Re: Architecture design docs

2021-08-05 Thread Tony Pearce
Hi Jesse, did you see this? https://www.ovirt.org/documentation/

kind regards,

Tony Pearce



On Fri, 6 Aug 2021 at 13:16, Jesse Hu  wrote:

> Hi there, are there any architecture design docs for oVirt (Engine &
> Node & VDSM) for a newbie to learn from? https://www.ovirt.org/Architecture
> gives 404. Thanks.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXXYO4KAINCVW4ENQOURATYY3OW5HRXQ/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CHNH2ZJWTNWLFFY3Y7DC2KQ6DSZBNJED/


[ovirt-users] Architecture design docs

2021-08-05 Thread Jesse Hu
Hi there, are there any architecture design docs for oVirt (Engine & Node &
VDSM) for a newbie to learn from? https://www.ovirt.org/Architecture gives 404. Thanks.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VXXYO4KAINCVW4ENQOURATYY3OW5HRXQ/


[ovirt-users] Re: Question about PCI storage passthrough for a single guest VM

2021-08-05 Thread Tony Pearce
My apologies for duplicating posts - they initially got stuck and I really
wanted to reach out to the group with the query to try and discover
unknowns.

Passing through the whole PCI NVMe device is fine, because the VM is locked
to the host due to the GPU PCI passthrough anyway. I will implement a
mechanism to protect the data on the single disk in both cases.

I'm not exactly sure what type of disk writes are being used; it's a
learning model being trained by the GPUs. I'll try to find out more.
After I finished the config I searched online for a basic throughput
test for the disk. Here are the commands and results taken at that time
(below).

*Test on host with "local storage" (using a disk image on the nvme drive)*
# dd if=/dev/zero of=test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.92561 s, 558 MB/s

*Test on host with nvme pass through*

# dd if=/dev/zero of=/mnt/nvme/tmpflag bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.42554 s, 753 MB/s

In both cases the NVMe was used as an additional mounted drive. The OS boots
from a different disk image, which is located in a Storage Domain over
iSCSI.

I'm not anything close to a storage expert but I understand the gist of the
descriptions I find when searching about the dd parameters. Since it looks
like both configurations are going to be OK for longevity I'll aim to test
both scenarios live and choose the one which gives the best result for the
workload.
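
As an aside, something like fio can model the workload more closely than dd once
the real write pattern is known; a rough sketch, assuming fio is installed and
/mnt/nvme is the mount point used above (bs/rw should be adjusted to match the
training job):

fio --name=nvme-test --filename=/mnt/nvme/fio.test --size=4G \
    --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
    --iodepth=16 --numjobs=4 --time_based --runtime=60 --group_reporting
rm -f /mnt/nvme/fio.test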

Thanks a lot for your reply and help :)

Tony Pearce


On Fri, 6 Aug 2021 at 03:28, Thomas Hoberg  wrote:

> You gave some different details in your other post, but here you mention
> use of GPU pass through.
>
> Any pass through will lose you the live migration ability, but
> unfortunately with GPUs, that's just how it is these days: while those
> could in theory be moved when the GPUs were identical (because their amount
> of state is limited to VRAM size), the support code (and kernel
> interfaces?) simply does not exist today.
>
> In that scenario a pass-through storage device won't lose you anything you
> still have.
>
> But you'll have to remember that PCI pass-through works only at the
> granularity of a whole PCI device. That's fine with (an entire) NVMe,
> because these combine "disks" and "controller", not so fine with individual
> disks on a SATA or SCSI controller. And you certainly can't pass through
> partitions!
>
> It gets to be really fun with cascaded USB and I haven't really tried
> Thunderbolt either (mostly because I have given up on CentOS8/oVirt 4.4)
>
> But generally the VirtIOSCSI interface imposes so little overhead, it only
> becomes noticeable when you run massive amounts of tiny I/O on NVMe. Play
> with the block sizes and the sync flag on your DD tests to see the
> differences, I've had lots of fun (and some disillusions) with that, but
> mostly with Gluster storage over TCP/IP on Ethernet.
>
> If that's really where your bottlenecks are coming from, you may want to
> look at architecture rather than pass-through.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/6CJPD6TKL4M44O77RECZYTNVNSSMXJRU/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MKMN7YSEWX4EREE2H5OEOHH6455CH7FX/


[ovirt-users] Re: ISO Upload in in Paused by System Status

2021-08-05 Thread louisb
I obtained the certificate from the link on the oVirt console main page.
The certificate has been saved to storage. I attempted to import the certificate
into a Firefox browser and got the following message:

Please enter the password that was used to encrypt this certificate backup:

I entered the same password used during the installation of oVirt. After
entering the password, the following message is displayed:

Failed to decode the file. Either it is not in PKCS #12 format, has been
corrupted, or the password you entered was incorrect.

What could be the problem here? I don't have another password to enter.
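
As an aside, that password prompt usually means the file was imported under
"Your Certificates" (which expects a PKCS #12 backup); the engine CA is a plain
PEM certificate and normally goes under "Authorities" with no password. It can
also be fetched directly; a sketch, with engine.example.com as a placeholder for
the engine FQDN:

curl -k 'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o ovirt-engine-ca.pem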

Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/D3DRU37QBB4V43M2NT7BQKEUDHCFI635/


[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão via Users
Oh, I got it. --enablerepo is a yum/dnf option, not an mlnxofedinstall one.

Alright, I'll run mlnxofedinstall without arguments. That would do the job. Thank
you, Edward!

On 5 Aug 2021, at 17:26, Edward Berger <edwber...@gmail.com> wrote:

The ovirt node-ng installer iso creates imgbased systems with baseos and 
appstream repos disabled,
not something you would have with a regular base OS installed system with added 
oVirt repos...

so with node, 'dnf install foo' usually fails if not adding an extra 
--enablerepo flag, which seems to change with the OS version.

here's some old notes I had.

download latest mlnx ofed archive
tar xfvz *.tgz
cd *64

# mount -o loop MLNX*iso /mnt
# cd /mnt

#./mlnxinstall requires more RPMS to be installed
# note: some versions of CentOS use different case reponames, look at contents 
of /etc/yum.repos.d files
yum --enablerepo baseos install perl-Term-ANSIColor
yum --enablerepo baseos --enablerepo appstream install perl-Getopt-Long tcl 
gcc-gfortran tcsh tk make
./mlnxinstall

On Thu, Aug 5, 2021 at 3:32 PM Vinícius Ferrão <fer...@versatushpc.com.br> wrote:
Hi Edward, it seems that running mlnxofedinstall would do the job, although
I have some questions.

You mentioned the --enable-repo option but I didn't find it. There's a disable
one, so I'm assuming that it's enabled by default. Anyway, there are no repos
added after the script runs.

I've run the script with the arguments: ./mlnxofedinstall --with-nfsrdma -vvv; 
and everything went fine:

[root@rhvepyc2 mnt]# /etc/init.d/openibd status

  HCA driver loaded

Configured IPoIB devices:
ib0

Currently active IPoIB devices:
ib0
Configured Mellanox EN devices:

Currently active Mellanox devices:
ib0

The following OFED modules are loaded:

  rdma_ucm
  rdma_cm
  ib_ipoib
  mlx5_core
  mlx5_ib
  ib_uverbs
  ib_umad
  ib_cm
  ib_core
  mlxfw

[root@rhvepyc2 mnt]# rpm -qa | grep -i mlnx
libibverbs-54mlnx1-1.54103.x86_64
infiniband-diags-54mlnx1-1.54103.x86_64
mlnx-ethtool-5.10-1.54103.x86_64
rdma-core-54mlnx1-1.54103.x86_64
dapl-utils-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
kmod-mlnx-nfsrdma-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
dapl-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
mlnx-tools-5.2.0-0.54103.x86_64
libibumad-54mlnx1-1.54103.x86_64
opensm-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
kmod-kernel-mft-mlnx-4.17.0-1.rhel8u4.x86_64
ibacm-54mlnx1-1.54103.x86_64
dapl-devel-static-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
ar_mgr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
rdma-core-devel-54mlnx1-1.54103.x86_64
opensm-static-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
srp_daemon-54mlnx1-1.54103.x86_64
sharp-2.5.0.MLNX20210613.83fe753-1.54103.x86_64
mlnx-iproute2-5.11.0-1.54103.x86_64
kmod-knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
librdmacm-54mlnx1-1.54103.x86_64
opensm-libs-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
mlnx-ofa_kernel-devel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
dapl-devel-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
dump_pr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
mlnxofed-docs-5.4-1.0.3.0.noarch
opensm-devel-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
librdmacm-utils-54mlnx1-1.54103.x86_64
mlnx-fw-updater-5.4-1.0.3.0.x86_64
kmod-mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
libibverbs-utils-54mlnx1-1.54103.x86_64
ibutils2-2.1.1-0.136.MLNX20210617.g4883fca.54103.x86_64

As a final question, did you select the --add-kernel-support option on the
script? I couldn't find any difference between enabling it or not.

Thank you.

On 5 Aug 2021, at 15:20, Vinícius Ferrão <fer...@versatushpc.com.br> wrote:

Hmmm. Running the mlnx_ofed_install.sh script is a pain. But I got your idea.
I'll do this test right now and report back. Ideally, using the repo would
guarantee an easy upgrade path between releases, but Mellanox is lacking on this
part.

And yes Edward, I want to use the virtual Infiniband interfaces too.

Thank you.

On 5 Aug 2021, at 10:52, Edward Berger <edwber...@gmail.com> wrote:

I don't know if you can just remove the gluster-rdma rpm.

I'm using mlnx ofed on some 4.4 ovirt node hosts by installing it via the 
mellanox tar/iso and
running the mellanox install script after adding the required dependencies with 
--enable-repo,
which isn't the same as adding a repository and 'dnf install'.  So I would try 
that on a test host.

I use it for the 'virtual infiniband' interfaces that get attached to VMs as 
'host device passthru'.

I'll note the node versions of gluster are 7.8 (node 
4.4.4.0/CentOS8.3) and 7.9 (node 
4.4.4.1/CentOS8.3).
unlike your glusterfs version 6.0.x

I'll be trying to install mellanox ofed on node 4.4.7.1 (CentOS 8 stream) soon 
to see how that works out.



On Wed, Aug 4, 2021 at 10:04 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV 

[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Edward Berger
The ovirt node-ng installer iso creates imgbased systems with baseos and
appstream repos disabled,
not something you would have with a regular base OS installed system with
added oVirt repos...

so with node, 'dnf install foo' usually fails if not adding an extra
--enablerepo flag, which seems to change with the OS version.

here's some old notes I had.

download latest mlnx ofed archive
tar xfvz *.tgz
cd *64

# mount -o loop MLNX*iso /mnt
# cd /mnt

#./mlnxinstall requires more RPMS to be installed
# note: some versions of CentOS use different case reponames, look at
contents of /etc/yum.repos.d files
yum --enablerepo baseos install perl-Term-ANSIColor
yum --enablerepo baseos --enablerepo appstream install
perl-Getopt-Long tcl gcc-gfortran tcsh tk make
./mlnxinstall


On Thu, Aug 5, 2021 at 3:32 PM Vinícius Ferrão 
wrote:

> Hi Edward, it seems that running mlnxofedinstall would do the job, although
> I have some questions.
>
> You mentioned the --enable-repo option but I didn't find it. There's a
> disable one, so I'm assuming that it's enabled by default. Anyway, there are
> no repos added after the script runs.
>
> I've run the script with the arguments: ./mlnxofedinstall --with-nfsrdma
> -vvv; and everything went fine:
>
> [root@rhvepyc2 mnt]# /etc/init.d/openibd status
>
>   HCA driver loaded
>
> Configured IPoIB devices:
> ib0
>
> Currently active IPoIB devices:
> ib0
> Configured Mellanox EN devices:
>
> Currently active Mellanox devices:
> ib0
>
> The following OFED modules are loaded:
>
>   rdma_ucm
>   rdma_cm
>   ib_ipoib
>   mlx5_core
>   mlx5_ib
>   ib_uverbs
>   ib_umad
>   ib_cm
>   ib_core
>   mlxfw
>
> [root@rhvepyc2 mnt]# rpm -qa | grep -i mlnx
> libibverbs-54mlnx1-1.54103.x86_64
> infiniband-diags-54mlnx1-1.54103.x86_64
> mlnx-ethtool-5.10-1.54103.x86_64
> rdma-core-54mlnx1-1.54103.x86_64
> dapl-utils-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
> kmod-mlnx-nfsrdma-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
> dapl-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
> mlnx-tools-5.2.0-0.54103.x86_64
> libibumad-54mlnx1-1.54103.x86_64
> opensm-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
> kmod-kernel-mft-mlnx-4.17.0-1.rhel8u4.x86_64
> ibacm-54mlnx1-1.54103.x86_64
> dapl-devel-static-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
> ar_mgr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
> mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
> rdma-core-devel-54mlnx1-1.54103.x86_64
> opensm-static-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
> srp_daemon-54mlnx1-1.54103.x86_64
> sharp-2.5.0.MLNX20210613.83fe753-1.54103.x86_64
> mlnx-iproute2-5.11.0-1.54103.x86_64
> kmod-knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
> librdmacm-54mlnx1-1.54103.x86_64
> opensm-libs-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
> mlnx-ofa_kernel-devel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
> dapl-devel-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
> dump_pr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
> mlnxofed-docs-5.4-1.0.3.0.noarch
> opensm-devel-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
> knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
> librdmacm-utils-54mlnx1-1.54103.x86_64
> mlnx-fw-updater-5.4-1.0.3.0.x86_64
> kmod-mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
> libibverbs-utils-54mlnx1-1.54103.x86_64
> ibutils2-2.1.1-0.136.MLNX20210617.g4883fca.54103.x86_64
>
> As a final question, did you select the --add-kernel-support option on
> the script? I couldn't find any difference between enabling it or not.
>
> Thank you.
>
> On 5 Aug 2021, at 15:20, Vinícius Ferrão 
> wrote:
>
> Hmmm. Running the mlnx_ofed_install.sh script is a pain. But I got your
> idea. I'll do this test right now and report back. Ideally using the repo
> would guarantee an easy upgrade path between releases, but Mellanox is
> lacking on this part.
>
> And yes Edward, I want to use the virtual Infiniband interfaces too.
>
> Thank you.
>
> On 5 Aug 2021, at 10:52, Edward Berger  wrote:
>
> I don't know if you can just remove the gluster-rdma rpm.
>
> I'm using mlnx ofed on some 4.4 ovirt node hosts by installing it via the
> mellanox tar/iso and
> running the mellanox install script after adding the required dependencies
> with --enable-repo,
> which isn't the same as adding a repository and 'dnf install'.  So I would
> try that on a test host.
>
> I use it for the 'virtual infiniband' interfaces that get attached to VMs
> as 'host device passthru'.
>
> I'll note the node versions of gluster are 7.8 (node 4.4.4.0/CentOS8.3)
> and 7.9 (node 4.4.4.1/CentOS8.3).
> unlike your glusterfs version 6.0.x
>
> I'll be trying to install mellanox ofed on node 4.4.7.1 (CentOS 8 stream)
> soon to see how that works out.
>
>
>
> On Wed, Aug 4, 2021 at 10:04 PM Vinícius Ferrão via Users 
> wrote:
>
>> Hello,
>>
>> Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each
>> other?
>>
>> The real issue is regarding GlusterFS. It seems to be a Mellanox issue,
>> but I would like to know if there's something that we can do make both play
>> nice on the same machine:

[ovirt-users] Re: import from vmware provider always failis

2021-08-05 Thread Thomas Hoberg
Honestly, this sounds like $1000 advice!

Thanks for sharing!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/V6LTN7SHO264NC6HLB6VS2HENL5CT23A/


[ovirt-users] Re: Terrible Disk Performance on Windows 10 VM

2021-08-05 Thread Thomas Hoberg
You found the issue!

VirtIOSCSI can only do its magic when it's actually used. And once the boot
disk was running using AHCI emulation, it's a little hard to make it
"re-attach" to SCSI.

I am pretty sure it could be done, like you could make Windows disks switch 
from IDE to SATA/AHCI with a bit of twiddling in the registry using recovery 
mode.

I think I managed to feed the Windows installer a floppy image with the VirtIO 
drivers at one point, but it took me half a workday I think...

That is one of the reasons I like importing my Windows VMs into oVirt from
VirtualBox. And oVirt doesn't seem to like the OVAs VMware produces anyway.
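
For anyone attempting that registry twiddling, a very rough sketch (assuming the
virtio-win drivers are already present in the guest; vioscsi/viostor are the
usual virtio-win service names, adjust to what the guest actually has):

REM inside the Windows guest, before switching the disk interface:
reg add "HKLM\SYSTEM\CurrentControlSet\Services\vioscsi" /v Start /t REG_DWORD /d 0 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\viostor" /v Start /t REG_DWORD /d 0 /f
REM then shut down, change the disk from IDE/SATA to VirtIO-SCSI in oVirt, and boot.

The gentler route is often to attach a small second disk on the VirtIO-SCSI
controller first, boot once so Windows installs the driver, and only then switch
the boot disk.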
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4PLHRCBHNXMZBI3MPG53CD23SWXFMQZR/


[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão via Users
Hi Edward, it seems that running mlnxofedinstall would do the job, although
I have some questions.

You mentioned the --enable-repo option but I didn't find it. There's a disable
one, so I'm assuming that it's enabled by default. Anyway, there are no repos
added after the script runs.

I've run the script with the arguments: ./mlnxofedinstall --with-nfsrdma -vvv; 
and everything went fine:

[root@rhvepyc2 mnt]# /etc/init.d/openibd status

  HCA driver loaded

Configured IPoIB devices:
ib0

Currently active IPoIB devices:
ib0
Configured Mellanox EN devices:

Currently active Mellanox devices:
ib0

The following OFED modules are loaded:

  rdma_ucm
  rdma_cm
  ib_ipoib
  mlx5_core
  mlx5_ib
  ib_uverbs
  ib_umad
  ib_cm
  ib_core
  mlxfw

[root@rhvepyc2 mnt]# rpm -qa | grep -i mlnx
libibverbs-54mlnx1-1.54103.x86_64
infiniband-diags-54mlnx1-1.54103.x86_64
mlnx-ethtool-5.10-1.54103.x86_64
rdma-core-54mlnx1-1.54103.x86_64
dapl-utils-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
kmod-mlnx-nfsrdma-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
dapl-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
mlnx-tools-5.2.0-0.54103.x86_64
libibumad-54mlnx1-1.54103.x86_64
opensm-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
kmod-kernel-mft-mlnx-4.17.0-1.rhel8u4.x86_64
ibacm-54mlnx1-1.54103.x86_64
dapl-devel-static-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
ar_mgr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
rdma-core-devel-54mlnx1-1.54103.x86_64
opensm-static-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
srp_daemon-54mlnx1-1.54103.x86_64
sharp-2.5.0.MLNX20210613.83fe753-1.54103.x86_64
mlnx-iproute2-5.11.0-1.54103.x86_64
kmod-knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
librdmacm-54mlnx1-1.54103.x86_64
opensm-libs-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
mlnx-ofa_kernel-devel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
dapl-devel-2.1.10.1.mlnx-OFED.4.9.0.1.4.54103.x86_64
dump_pr-1.0-5.9.0.MLNX20210617.g5dd71ee.54103.x86_64
mlnxofed-docs-5.4-1.0.3.0.noarch
opensm-devel-5.9.0.MLNX20210617.c9f2ade-0.1.54103.x86_64
knem-1.1.4.90mlnx1-OFED.5.1.2.5.0.1.rhel8u4.x86_64
librdmacm-utils-54mlnx1-1.54103.x86_64
mlnx-fw-updater-5.4-1.0.3.0.x86_64
kmod-mlnx-ofa_kernel-5.4-OFED.5.4.1.0.3.1.rhel8u4.x86_64
libibverbs-utils-54mlnx1-1.54103.x86_64
ibutils2-2.1.1-0.136.MLNX20210617.g4883fca.54103.x86_64

As a final question, did you select the --add-kernel-support option on the
script? I couldn't find any difference between enabling it or not.

Thank you.

On 5 Aug 2021, at 15:20, Vinícius Ferrão <fer...@versatushpc.com.br> wrote:

Hmmm. Running the mlnx_ofed_install.sh script is a pain. But I got your idea.
I'll do this test right now and report back. Ideally, using the repo would
guarantee an easy upgrade path between releases, but Mellanox is lacking on this
part.

And yes Edward, I want to use the virtual Infiniband interfaces too.

Thank you.

On 5 Aug 2021, at 10:52, Edward Berger <edwber...@gmail.com> wrote:

I don't know if you can just remove the gluster-rdma rpm.

I'm using mlnx ofed on some 4.4 ovirt node hosts by installing it via the 
mellanox tar/iso and
running the mellanox install script after adding the required dependencies with 
--enable-repo,
which isn't the same as adding a repository and 'dnf install'.  So I would try 
that on a test host.

I use it for the 'virtual infiniband' interfaces that get attached to VMs as 
'host device passthru'.

I'll note the node versions of gluster are 7.8 (node 
4.4.4.0/CentOS8.3) and 7.9 (node 
4.4.4.1/CentOS8.3).
unlike your glusterfs version 6.0.x

I'll be trying to install mellanox ofed on node 4.4.7.1 (CentOS 8 stream) soon 
to see how that works out.



On Wed, Aug 4, 2021 at 10:04 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?

The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I 
would like to know if there's something that we can do make both play nice on 
the same machine:

[root@rhvepyc2 ~]# dnf update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
Dependencies resolved.

 Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and 
mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
  - cannot install the best update candidate for package 
glusterfs-rdma-6.0-49.1.el8.x86_64
  - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none 
of the providers can be installed
  - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma 
provided by glusterfs-rdma-6.0-49.1.el8.x86_64
  - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 
3.12.2-40.2.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 
6.0-15.el8, but none of 

[ovirt-users] Re: Question about PCI storage passthrough for a single guest VM

2021-08-05 Thread Thomas Hoberg
You gave some different details in your other post, but here you mention use of 
GPU pass through.

Any pass through will lose you the live migration ability, but unfortunately 
with GPUs, that's just how it is these days: while those could in theory be 
moved when the GPUs were identical (because their amount of state is limited to 
VRAM size), the support code (and kernel interfaces?) simply does not exist 
today.

In that scenario a pass-through storage device won't lose you anything you 
still have.

But you'll have to remember that PCI pass-through works only at the granularity 
of a whole PCI device. That's fine with (an entire) NVMe, because these combine 
"disks" and "controller", not so fine with individual disks on a SATA or SCSI 
controller. And you certainly can't pass through partitions!

It gets to be really fun with cascaded USB and I haven't really tried 
Thunderbolt either (mostly because I have given up on CentOS8/oVirt 4.4)

But generally the VirtIOSCSI interface imposes so little overhead, it only 
becomes noticeable when you run massive amounts of tiny I/O on NVMe. Play with 
the block sizes and the sync flag on your DD tests to see the differences, I've 
had lots of fun (and some disillusions) with that, but mostly with Gluster 
storage over TCP/IP on Ethernet.

If that's really where your bottlenecks are coming from, you may want to look 
at architecture rather than pass-through.
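
For instance, a quick way to see that effect (a sketch; the target path is
whatever filesystem you are measuring, and the test file gets overwritten):

# large sequential blocks with direct I/O vs. small synchronous writes
dd if=/dev/zero of=/path/to/test.img bs=1M count=1024 oflag=direct
dd if=/dev/zero of=/path/to/test.img bs=4k count=100000 oflag=dsync
rm -f /path/to/test.img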
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/6CJPD6TKL4M44O77RECZYTNVNSSMXJRU/


[ovirt-users] Re: Data recovery from (now unused, but still mounted) Gluster Volume for a single VM

2021-08-05 Thread Thomas Hoberg
If you manage to export the disk image via the GUI, the result should be a 
qcow2 format file, which you can mount/attach to anything Linux (well, if the 
VM was Linux... it didn't say)

But it's perhaps easier to simply try to attach the disk of the failed VM as a 
secondary to a live VM to recover the data.
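
For the mount/attach route, a minimal sketch with qemu-nbd (assuming the export
landed in /path/to/exported-disk.qcow2 and the data sits on the first partition;
keep it read-only while poking around):

modprobe nbd max_part=16
qemu-nbd --connect=/dev/nbd0 --read-only /path/to/exported-disk.qcow2
lsblk /dev/nbd0                      # find the partition with the data
mount -o ro /dev/nbd0p1 /mnt/recovery
# ... copy files out ...
umount /mnt/recovery
qemu-nbd --disconnect /dev/nbd0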
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/SLXLQ4BLQUPBV5355DFFACF6LFJX4MWY/


[ovirt-users] Re: Data recovery from (now unused, but still mounted) Gluster Volume for a single VM

2021-08-05 Thread Thomas Hoberg
First off, I have very little hope you'll be able to recover your data by
working at the Gluster level...

And then there is a lot of information missing between the lines: I guess you 
are using a 3 node HCI setup and were adding new disks (/dev/sdb) on all three 
nodes and trying to move the glusterfs to those new bigger disks?

Resizing/moving/adding or removing disks are "natural" operations for Gluster. 
But oVirt isn't "gluster native" and may not be so forgiving if you just swap 
device paths on bricks.


Practical guides on how to replace the storage without downtime (after all,
this is an HA solution, right?) are somehow missing from the oVirt
documentation, and if I were a rich man, perhaps I'd get myself an RHV support
contract and see whether RHEL engineers would say anything but "not supported".


The first thing I'd recommend is to create some temporary space. I found using 
an extra disk as NFS storage on one of the hosts was a good way to gain some 
maneuvering room e.g. for backups.

You can try to attach the disk of the broken VM as a secondary to another good 
VM to see if the data can be salvaged from there. But before you attach it (and 
perhaps an automatic fsck ruins it for you), you can perhaps create a copy to 
the NFS export/backup (domain).
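
A sketch of that "copy first" step (paths are purely illustrative; the real
image path depends on the storage domain and disk UUIDs, and the VM should stay
down while copying):

qemu-img convert -p -O qcow2 \
    /rhev/data-center/mnt/glusterSD/<server>:_<volume>/<sd-uuid>/images/<disk-uuid>/<image-uuid> \
    /mnt/nfs-backup/broken-vm-disk.qcow2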

If you weren't out of space, you'd just create a local copy and work with that. 
You can also try exporting the disk image, but there is a lot of untested or 
slow code in that operation from my experience.

If that image happens to be empty (I've seen that happen) or the data on it
cannot be recovered, there is little to be gained by trying to work at the
GlusterFS level. The logical disk image file will be chunked into 64MB bits and
their order is buried deep either in GlusterFS or in oVirt; perhaps your
business is the better place to invest your energy.

But there is a good chance the data portion of that disk image still has your 
data. The fact that oVirt/KVM generally pauses VMs when it has issues with the 
storage, tends to preserve and protect your data rather better than what 
happens when physical hosts suffer brown outs or power glitches.

I guess you'll have learned that oVirt doesn't protect you from making
mistakes; it only tries to offer some resilience against faults.

It's good and valuable to report these things, because it helps others to 
learn, too.

I sincerely hope you'll make do!
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/XSCXBSKCI3RKM5FUH57WG6JCMHOE7PMB/


[ovirt-users] Re: Question about pci passthrough for guest (SSD passthrough) ?

2021-08-05 Thread Thomas Hoberg

> 
> The caveat with local storage is that I can only use the remaining free
> space in /var/ for disk images. The result is the 1TB SSD has around
> 700GB remaining free space.
> 
> So I was wondering about simply passing through the nvme ssd (PCI) to the
> guest, so the guest can utilise the fill SSD.
> 
> Are there any "gotcha's" with doing this other than the usual gpu
> passthrough ones?
> 

The caveat is that you cannot pass through a partial SSD, only a whole device.
And as a matter of fact, I'd say you can only pass through entire PCI(e)
devices, so traditional SCSI/SATA disks might not work individually; you'd have
to pass through the entire controller.
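
A quick way to see that granularity on the host (a sketch; the PCI addresses
will differ per machine):

lspci -nn | grep -i 'non-volatile'              # locate the NVMe controller
find /sys/kernel/iommu_groups/ -type l | sort   # devices in the same IOMMU group must be passed through together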

Effectively you'd "unplug" the NVMe from the host and plug it into the guest 
and if that wasn't stopped by some sanity check, neither system would continue 
to run for long, if that is your boot device (or the disk is used otherwise).

I know 3x more IOPS sounds attractive, but is your workload really that
disk-bound? Perhaps you'll need to think about some distributed memory cache.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/O32Z2UC7X5AKN6A6Q5HNS77EVYCY3CN6/


[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão
Hmmm. Running the mlnx_ofed_install.sh script is a pain. But I got your idea.
I'll do this test right now and report back. Ideally, using the repo would
guarantee an easy upgrade path between releases, but Mellanox is lacking on this
part.

And yes Edward, I want to use the virtual Infiniband interfaces too.

Thank you.

On 5 Aug 2021, at 10:52, Edward Berger <edwber...@gmail.com> wrote:

I don't know if you can just remove the gluster-rdma rpm.

I'm using mlnx ofed on some 4.4 ovirt node hosts by installing it via the 
mellanox tar/iso and
running the mellanox install script after adding the required dependencies with 
--enable-repo,
which isn't the same as adding a repository and 'dnf install'.  So I would try 
that on a test host.

I use it for the 'virtual infiniband' interfaces that get attached to VMs as 
'host device passthru'.

I'll note the node versions of gluster are 7.8 (node 
4.4.4.0/CentOS8.3) and 7.9 (node 
4.4.4.1/CentOS8.3).
unlike your glusterfs version 6.0.x

I'll be trying to install mellanox ofed on node 4.4.7.1 (CentOS 8 stream) soon 
to see how that works out.



On Wed, Aug 4, 2021 at 10:04 PM Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?

The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I 
would like to know if there's something that we can do make both play nice on 
the same machine:

[root@rhvepyc2 ~]# dnf update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
Dependencies resolved.

 Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and 
mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
  - cannot install the best update candidate for package 
glusterfs-rdma-6.0-49.1.el8.x86_64
  - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none 
of the providers can be installed
  - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma 
provided by glusterfs-rdma-6.0-49.1.el8.x86_64
  - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 
3.12.2-40.2.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 
6.0-15.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 
6.0-20.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) = 
6.0-37.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64) = 
6.0-37.2.el8, but none of the providers can be installed
  - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-15.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-20.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.2.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install the best update candidate for package 
ovirt-host-4.4.7-1.el8ev.x86_64
  - cannot install the best update candidate for package 
glusterfs-6.0-49.1.el8.x86_64
=
 PackageArchitectureVersion 
  RepositorySize
=
Installing dependencies:
 openvswitchx86_64  2.14.1-1.54103  
  mlnx_ofed_5.4-1.0.3.0_base17 M
 ovirt-openvswitch  noarch  2.11-1.el8ev
  rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms  8.7 k
 replacing  rhv-openvswitch.noarch 1:2.11-7.el8ev
 unboundx86_64  1.7.3-15.el8
  rhel-8-for-x86_64-appstream-rpms 895 k
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
 glusterfs  x86_64  3.12.2-40.2.el8 
  rhel-8-for-x86_64-baseos-rpms558 k
 glusterfs  x86_64  6.0-15.el8  
  rhel-8-for-x86_64-baseos-rpms658 k
 glusterfs  x86_64  6.0-20.el8  

[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Vinícius Ferrão via Users
Yes, it is deprecated on RHGS 3.5, but I really don't care for Gluster and I
don't use it. What I would like to use is things like NFS over RDMA, which only
Mellanox OFED provides, and the host has other users, so we need MLNX OFED to
get support from Mellanox.

That's why I'm trying to install the MLNX OFED distribution. This is a
development machine, it's not for production, so we don't care if things break.
But even when I try to force the install of the MLNX OFED packages, things do
not work as expected.

Thank you.
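
For reference, once the NFSoRDMA module is in place, an NFS-over-RDMA mount
typically looks like the sketch below; the server and export names are
placeholders, and 20049 is the usual NFSoRDMA port:

mount -t nfs -o proto=rdma,port=20049 nfs-server.example.com:/export /mnt/nfs-rdma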

On 5 Aug 2021, at 06:55, Strahil Nikolov <hunter86...@yahoo.com> wrote:

As far as I know rdma is deprecated on glusterfs, but it most probably works.

Best Regards,
Strahil Nikolov

On Thu, Aug 5, 2021 at 5:05, Vinícius Ferrão via Users <users@ovirt.org> wrote:
Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?

The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I 
would like to know if there's something that we can do make both play nice on 
the same machine:

[root@rhvepyc2 ~]# dnf update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
Dependencies resolved.

Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and 
mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
  - cannot install the best update candidate for package 
glusterfs-rdma-6.0-49.1.el8.x86_64
  - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none 
of the providers can be installed
  - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma 
provided by glusterfs-rdma-6.0-49.1.el8.x86_64
  - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 
3.12.2-40.2.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 
6.0-15.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 
6.0-20.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) = 
6.0-37.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64) = 
6.0-37.2.el8, but none of the providers can be installed
  - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-15.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-20.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.2.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install the best update candidate for package 
ovirt-host-4.4.7-1.el8ev.x86_64
  - cannot install the best update candidate for package 
glusterfs-6.0-49.1.el8.x86_64
=
PackageArchitectureVersion  
RepositorySize
=
Installing dependencies:
openvswitchx86_64  2.14.1-1.54103   
 mlnx_ofed_5.4-1.0.3.0_base17 M
ovirt-openvswitch  noarch  2.11-1.el8ev 
 rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms  8.7 k
replacing  rhv-openvswitch.noarch 1:2.11-7.el8ev
unboundx86_64  1.7.3-15.el8 
 rhel-8-for-x86_64-appstream-rpms895 k
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
glusterfs  x86_64  3.12.2-40.2.el8  
rhel-8-for-x86_64-baseos-rpms558 k
glusterfs  x86_64  6.0-15.el8   
 rhel-8-for-x86_64-baseos-rpms658 k
glusterfs  x86_64  6.0-20.el8   
 rhel-8-for-x86_64-baseos-rpms659 k
glusterfs  x86_64  6.0-37.el8   
 rhel-8-for-x86_64-baseos-rpms663 k
glusterfs  x86_64  6.0-37.2.el8 
 rhel-8-for-x86_64-baseos-rpms662 k
Skipping packages with broken dependencies:

[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-05 Thread Nir Soffer
On Thu, Aug 5, 2021 at 5:12 PM KK CHN  wrote:
> I have installed the ovirt-engine-sdk-python using pip3 in my python3
> virtual environment on my personal laptop

I'm not sure this is the right version. Use the rpms provided by ovirt instead.

...
> and created the file in the user kris home directory on the same laptop  // Is
> what I am doing right?
>
> (base) kris@my-ThinkPad-X270:~$ cat ~/.config/ovirt.conf
> [engine-dev]

This can be any name you like for this setup.

> engine_url=https://engine-dev   // what is this engine URL? Is it the RHV/oVirt
> URL that our service provider can provide?

This is your engine url, the same url you access engine UI.

> username=admin@internal
> password=mypassword
> cafile=/etc/pki/vdsm/certs/cacert.pem // I don't have any cacert.pem file
> on my laptop at /etc/pki/vdsm/certs/cacert.pem; there is no such folder at all

This path works on ovirt host. You can download engine cafile from your
engine using:

curl -k 'https://engine-dev/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' > engine-dev.pem

and use the path to the cafile:

cafile=/home/kris/engine-dev.pem

...
> But  I couldn't find any examples folder where I can find the 
> download_disk.py   // So I have downloaded files for 
> ovirt-engine-sdk-python-4.1.3.tar.gz
>
> and untarred the files where I am able to find the  download_disk.py

You need to use ovirt sdk from 4.4. 4.1 sdk is too old.

Also if  you try to run this on another host, you need to install
more packages.

1. Install ovirt release rpm

dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm

2. Install required packages

dnf install python3-ovirt-engine-sdk4 ovirt-imageio-client

$ rpm -q python3-ovirt-engine-sdk4 ovirt-imageio-client
python3-ovirt-engine-sdk4-4.4.13-1.el8.x86_64
ovirt-imageio-client-2.2.0-1.el8.x86_64

$ find /usr/share/ -name download_disk.py
/usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py

Now you can use download_disk.py to download images from ovirt setup.

...
> Can I execute now the following from my laptop ? so that it will connect to 
> the rhevm host node and download the disks ?

Yes

> (base) kris@my-ThinkPad-X270:$ python3 download_disk.py  -c engine-dev 
> MY_vm_blah_Id /var/tmp/disk1.raw  //is this correct ?

Almost, see the help:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py -h
usage: download_disk.py [-h] -c CONFIG [--debug] [--logfile LOGFILE]
[-f {raw,qcow2}] [--use-proxy]
[--max-workers MAX_WORKERS]
[--buffer-size BUFFER_SIZE]
[--timeout-policy {legacy,pause,cancel}]
disk_uuid filename

Download disk

positional arguments:
  disk_uuid Disk UUID to download.
  filename  Path to write downloaded image.

optional arguments:
  -h, --helpshow this help message and exit
  -c CONFIG, --config CONFIG
Use engine connection details from [CONFIG] section in
~/.config/ovirt.conf.
  --debug   Log debug level messages to logfile.
  --logfile LOGFILE Log file name (default example.log).
  -f {raw,qcow2}, --format {raw,qcow2}
Downloaded file format. For best compatibility, use
qcow2 (default qcow2).
  --use-proxy   Download via proxy on the engine host (less
efficient).
  --max-workers MAX_WORKERS
Maximum number of workers to use for download. The
default (4) improves performance when downloading a
single disk. You may want to use lower number if you
download many disks in the same time.
  --buffer-size BUFFER_SIZE
Buffer size per worker. The default (4194304) gives
good performance with the default number of workers.
If you use smaller number of workers you may want use
larger value.
  --timeout-policy {legacy,pause,cancel}
The action to be made for a timed out transfer


Example command to download disk id 3649d84b-6f35-4314-900a-5e8024e3905c
from engine configuration myengine to file disk.img, converting the
format to raw:

$ python3 /usr/share/doc/python3-ovirt-engine-sdk4/examples/download_disk.py
-c myengine --format raw 3649d84b-6f35-4314-900a-5e8024e3905c disk.img
[   0.0 ] Connecting...
[   0.5 ] Creating image transfer...
[   2.8 ] Transfer ID: 62c99f08-e58c-4cc2-8c72-9aa9be835d0f
[   2.8 ] Transfer host name: host4
[   2.8 ] Downloading image...
[ 100.00% ] 6.00 GiB, 11.62 seconds, 528.83 MiB/s
[  14.4 ] Finalizing image transfer...

You can check the image with qemu-img info:

$ qemu-img info disk.img
image: disk.img
file format: 

[ovirt-users] HA VM and vm leases usage with site failure

2021-08-05 Thread Gianluca Cecchi
Hello,
supposing latest 4.4.7 environment installed with an external engine and
two hosts, one in one site and one in another site.
For storage I have one FC storage domain.
I try to simulate a sort of "site failure scenario" to see what kind of HA
I should expect.

The 2 hosts have power mgmt configured through fence_ipmilan.

I have 2 VMs, one configured as HA with lease on storage (Resume Behavior:
kill) and one not marked as HA.

Initially host1 is SPM and it is the host that runs the two VMs.

Fencing of host1 from host2 initially works ok. I can test also from
command line:
# fence_ipmilan -a 10.10.193.152 -P -l my_fence_user -A password -L
operator -S /usr/local/bin/pwd.sh -o status
Status: ON

On host2 I then prevent reaching host1 iDRAC:
firewall-cmd --direct --add-rule ipv4 filter OUTPUT 0 -d 10.10.193.152 -p
udp --dport 623 -j DROP
firewall-cmd --direct --add-rule ipv4 filter OUTPUT 1 -j ACCEPT

so that:

# fence_ipmilan -a 10.10.193.152 -P -l my_fence_user -A password -L
operator -S /usr/local/bin/pwd.sh -o status
2021-08-05 15:06:07,254 ERROR: Failed: Unable to obtain correct plug status
or plug is not available

On host1 I generate panic:
# date ; echo 1 > /proc/sys/kernel/sysrq ; echo c > /proc/sysrq-trigger
Thu Aug  5 15:06:24 CEST 2021

host1 correctly completes its crash dump (kdump integration is enabled) and
reboots, but I stop it at grub prompt so that host1 is unreachable from
host2 point of view and also power fencing not determined

At this point I thought that the VM lease functionality would have come into
play and host2 would be able to restart the HA VM, as it is able to see
that the lease is not taken by the other host and so it can acquire the
lock itself.
Instead it goes through the power-fencing attempt loop.
I wait about 25 minutes without any effect but continuous attempts.

After 2 minutes host2 correctly becomes SPM and VMs are marked as unknown

At a certain point after the failures in power fencing host1, I see the
event:

Failed to power fence host host1. Please check the host status and it's
power management settings, and then manually reboot it and click "Confirm
Host Has Been Rebooted"

If I select host and choose "Confirm Host Has Been Rebooted", then the two
VMs are marked as down and the HA one is correctly booted by host2.

But this requires my manual intervention.

Is the behavior above the expected one, or should the use of VM leases have
allowed host2 to bypass the fencing inability and start the HA VM with the
lease? Otherwise I don't understand the reason to have the lease at all.
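
As a side note, the lease itself is held by sanlock on the hosts, so its view
can be inspected directly while this happens; a sketch, run on a host:

sanlock client status    # lists lockspaces and resources (including VM leases) held by this host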

Thanks,
Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/FK254O4WOPWV56F753BVSK5GYQFZ4E5Q/


[ovirt-users] Re: Combining Virtual machine image with multiple disks attached

2021-08-05 Thread KK CHN
Hi all,

I have installed the ovirt-engine-sdk-python using pip3 in my python3
virtual environment on my personal laptop


(base) kris@my-ThinkPad-X270:~$ pip install ovirt-engine-sdk-python
Collecting ovirt-engine-sdk-python
  Downloading ovirt-engine-sdk-python-4.4.14.tar.gz (335 kB)
 || 335 kB 166 kB/s
Collecting pycurl>=7.19.0
  Downloading pycurl-7.43.0.6.tar.gz (222 kB)
 || 222 kB 496 kB/s
Requirement already satisfied: six in
./training/lib/python3.7/site-packages (from ovirt-engine-sdk-python)
(1.15.0)
Building wheels for collected packages: ovirt-engine-sdk-python, pycurl
  Building wheel for ovirt-engine-sdk-python (setup.py) ... done
  Created wheel for ovirt-engine-sdk-python:
filename=ovirt_engine_sdk_python-4.4.14-cp37-cp37m-linux_x86_64.whl
size=300970
sha256=128ee03642c36094d62a04435bf5def1e5f8eb2c800f97132e8da02665d227a8
  Stored in directory:
/home/kris/.cache/pip/wheels/2f/65/60/6a222dcdec777ae59bacb3f51a0a93e6cf9547b82cb0102db6
  Building wheel for pycurl (setup.py) ... done
  Created wheel for pycurl:
filename=pycurl-7.43.0.6-cp37-cp37m-linux_x86_64.whl size=269770
sha256=2cf705a2246041f9eaa5cbb19c530470edad7f476a0a7eb04c4877931699bea9
  Stored in directory:
/home/kris/.cache/pip/wheels/f2/32/dc/9ccf4566cfe0a7a11ee304c11af36a1e341a16fa30e74fb26e
Successfully built ovirt-engine-sdk-python pycurl
Installing collected packages: pycurl, ovirt-engine-sdk-python
Successfully installed ovirt-engine-sdk-python-4.4.14 pycurl-7.43.0.6
WARNING: You are using pip version 21.0.1; however, version 21.2.2 is
available.
You should consider upgrading via the '/home/kris/training/bin/python3.7 -m
pip install --upgrade pip' command.
(base) kris@my-ThinkPad-X270:~$ which pip
/home/kris/anaconda3/bin/pip
(base) kris@my-ThinkPad-X270:~$ pip --version
pip 21.0.1 from /home/kris/training/lib/python3.7/site-packages/pip (python
3.7)
(base) kris@my-ThinkPad-X270:~$ pwd
/home/kris


and created the file in the user kris home directory on the same laptop  // Is
what I am doing right?

(base) kris@my-ThinkPad-X270:~$ cat ~/.config/ovirt.conf
[engine-dev]
engine_url=https://engine-dev   // what is this engine URL? Is it the RHV/oVirt
URL that our service provider can provide?
username=admin@internal
password=mypassword
cafile=/etc/pki/vdsm/certs/cacert.pem // I don't have any cacert.pem
file on my laptop at /etc/pki/vdsm/certs/cacert.pem; there is no such folder at
all
(base) kris@my-ThinkPad-X270:~$ pwd
/home/kris
(base) kris@my-ThinkPad-X270:~$


But I couldn't find any examples folder where I can find
download_disk.py   // So I have downloaded the files from
ovirt-engine-sdk-python-4.1.3.tar.gz

and untarred the files where I am able to find the  download_disk.py

#

(base) 
kris@my-ThinkPad-X270:~/OVIRT_PYTON_SDK_SOURCE_FILES/ovirt-engine-sdk-python-4.1.3/examples$
ls
add_affinity_label.pyadd_vm_from_template_version.py
 follow_vm_links.py   set_vm_lease_storage_domain.py
add_bond.py  add_vm_nic.py
 get_display_ticket.pyset_vm_serial_number.py
add_cluster.py   add_vm.py
 import_external_vm.pyshow_summary.py
add_data_center.py   add_vm_snapshot.py
import_vm.py start_vm.py
add_floating_disk.py add_vm_with_sysprep.py
list_affinity_labels.py  start_vm_with_boot_devices.py
add_group.py add_vnc_console.py
list_glance_images.pystart_vm_with_cloud_init.py
add_host.py  assign_affinity_label_to_vm.py
list_roles.pystop_vm.py
add_independet_vm.py assign_permission.py
list_tags_of_vm.py   test_connection.py
add_instance_type.py assign_tag_to_vm.py
 list_tags.py unassign_tag_to_vm.py
add_mac_pool.py  attach_nfs_data_storage_domain.py
 list_vm_disks.py update_data_center.py
add_nfs_data_storage_domain.py   attach_nfs_iso_storage_domain.py
list_vm_snapshots.py update_fencing_options.py
add_nfs_iso_storage_domain.pychange_vm_cd.py
 list_vms.py  update_quota_limits.py
add_openstack_image_provider.py  clone_vm_from_snapshot.py
 page_vms.py  upload_disk.py
add_role.py  connection_builder.py
 remove_host.py   vm_backup.py
add_tag.py   disable_compression.py
remove_tag.py
add_user_ssh_public_key.py   download_disk.py
remove_vm.py
add_vm_disk.py   enable_serial_console.py
search_vms.py
(base) 
kris@my-ThinkPad-X270:~/OVIRT_PYTON_SDK_SOURCE_FILES/ovirt-engine-sdk-python-4.1.3/examples$


Can I execute now the following from my laptop ? so that it will connect to
the rhevm host node and download the disks ?

(base) kris@my-ThinkPad-X270:$ python3 download_disk.py  -c engine-dev
MY_vm_blah_Id /var/tmp/disk1.raw  //is this correct ?

My laptop doesn't have space to accommodate 300 GB, so can I attach a USB
hard disk and can I specify its 

[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Edward Berger
 I don't know if you can just remove the gluster-rdma rpm.

I'm using mlnx ofed on some 4.4 ovirt node hosts by installing it via the
mellanox tar/iso and
running the mellanox install script after adding the required dependencies
with --enable-repo,
which isn't the same as adding a repository and 'dnf install'.  So I would
try that on a test host.

I use it for the 'virtual infiniband' interfaces that get attached to VMs
as 'host device passthru'.

I'll note the node versions of gluster are 7.8 (node 4.4.4.0/CentOS8.3) and
7.9 (node 4.4.4.1/CentOS8.3).
unlike your glusterfs version 6.0.x

I'll be trying to install mellanox ofed on node 4.4.7.1 (CentOS 8 stream)
soon to see how that works out.



On Wed, Aug 4, 2021 at 10:04 PM Vinícius Ferrão via Users 
wrote:

> Hello,
>
> Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each
> other?
>
> The real issue is regarding GlusterFS. It seems to be a Mellanox issue,
> but I would like to know if there's something that we can do make both play
> nice on the same machine:
>
> [root@rhvepyc2 ~]# dnf update --nobest
> Updating Subscription Management repositories.
> Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM
> -03.
> Dependencies resolved.
>
>  Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch
> and mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
>   - cannot install the best update candidate for package
> glusterfs-rdma-6.0-49.1.el8.x86_64
>   - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but
> none of the providers can be installed
>   - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes
> glusterfs-rdma provided by glusterfs-rdma-6.0-49.1.el8.x86_64
>   - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires
> glusterfs(x86-64) = 3.12.2-40.2.el8, but none of the providers can be
> installed
>   - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) =
> 6.0-15.el8, but none of the providers can be installed
>   - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) =
> 6.0-20.el8, but none of the providers can be installed
>   - package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) =
> 6.0-37.el8, but none of the providers can be installed
>   - package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64)
> = 6.0-37.2.el8, but none of the providers can be installed
>   - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and
> glusterfs-6.0-49.1.el8.x86_64
>   - cannot install both glusterfs-6.0-15.el8.x86_64 and
> glusterfs-6.0-49.1.el8.x86_64
>   - cannot install both glusterfs-6.0-20.el8.x86_64 and
> glusterfs-6.0-49.1.el8.x86_64
>   - cannot install both glusterfs-6.0-37.el8.x86_64 and
> glusterfs-6.0-49.1.el8.x86_64
>   - cannot install both glusterfs-6.0-37.2.el8.x86_64 and
> glusterfs-6.0-49.1.el8.x86_64
>   - cannot install the best update candidate for package
> ovirt-host-4.4.7-1.el8ev.x86_64
>   - cannot install the best update candidate for package
> glusterfs-6.0-49.1.el8.x86_64
>
> =
>  PackageArchitectureVersion
>Repository
>   Size
>
> =
> Installing dependencies:
>  openvswitchx86_64
> 2.14.1-1.54103mlnx_ofed_5.4-1.0.3.0_base
> 17 M
>  ovirt-openvswitch  noarch  2.11-1.el8ev
> rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms
>   8.7 k
>  replacing  rhv-openvswitch.noarch 1:2.11-7.el8ev
>  unboundx86_64  1.7.3-15.el8
> rhel-8-for-x86_64-appstream-rpms
>  895 k
> Skipping packages with conflicts:
> (add '--best --allowerasing' to command line to force their upgrade):
>  glusterfs  x86_64
> 3.12.2-40.2.el8   rhel-8-for-x86_64-baseos-rpms
> 558 k
>  glusterfs  x86_64  6.0-15.el8
> rhel-8-for-x86_64-baseos-rpms
>   658 k
>  glusterfs  x86_64  6.0-20.el8
> rhel-8-for-x86_64-baseos-rpms
>   659 k
>  glusterfs  x86_64  6.0-37.el8
> rhel-8-for-x86_64-baseos-rpms
>   663 k
>  glusterfs  x86_64  6.0-37.2.el8
> rhel-8-for-x86_64-baseos-rpms
>   662 k
> Skipping packages with broken dependencies:
>  glusterfs-rdma x86_64
> 3.12.2-40.2.el8   rhel-8-for-x86_64-baseos-rpms
>  49 k
>  glusterfs-rdma  

[ovirt-users] Re: Is there a way to support Mellanox OFED with oVirt/RHV?

2021-08-05 Thread Strahil Nikolov via Users
As far as I know rdma is deprecated on glusterfs, but it most probably works.

Best Regards,
Strahil Nikolov

On Thu, Aug 5, 2021 at 5:05, Vinícius Ferrão via Users <users@ovirt.org> wrote:

Hello,

Is there a way to keep Mellanox OFED and oVirt/RHV playing nice with each other?

The real issue is regarding GlusterFS. It seems to be a Mellanox issue, but I 
would like to know if there's something that we can do make both play nice on 
the same machine:

[root@rhvepyc2 ~]# dnf update --nobest
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:25 ago on Wed 04 Aug 2021 02:01:11 AM -03.
Dependencies resolved.

 Problem: both package mlnx-ofed-all-user-only-5.4-1.0.3.0.rhel8.4.noarch and 
mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsolete glusterfs-rdma
  - cannot install the best update candidate for package 
glusterfs-rdma-6.0-49.1.el8.x86_64
  - package ovirt-host-4.4.7-1.el8ev.x86_64 requires glusterfs-rdma, but none 
of the providers can be installed
  - package mlnx-ofed-all-5.4-1.0.3.0.rhel8.4.noarch obsoletes glusterfs-rdma 
provided by glusterfs-rdma-6.0-49.1.el8.x86_64
  - package glusterfs-rdma-3.12.2-40.2.el8.x86_64 requires glusterfs(x86-64) = 
3.12.2-40.2.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-15.el8.x86_64 requires glusterfs(x86-64) = 
6.0-15.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-20.el8.x86_64 requires glusterfs(x86-64) = 
6.0-20.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.el8.x86_64 requires glusterfs(x86-64) = 
6.0-37.el8, but none of the providers can be installed
  - package glusterfs-rdma-6.0-37.2.el8.x86_64 requires glusterfs(x86-64) = 
6.0-37.2.el8, but none of the providers can be installed
  - cannot install both glusterfs-3.12.2-40.2.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-15.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-20.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install both glusterfs-6.0-37.2.el8.x86_64 and 
glusterfs-6.0-49.1.el8.x86_64
  - cannot install the best update candidate for package 
ovirt-host-4.4.7-1.el8ev.x86_64
  - cannot install the best update candidate for package 
glusterfs-6.0-49.1.el8.x86_64
=
 Package                            Architecture            Version             
             Repository                                                Size
=
Installing dependencies:
 openvswitch                        x86_64                  2.14.1-1.54103      
              mlnx_ofed_5.4-1.0.3.0_base                                17 M
 ovirt-openvswitch                  noarch                  2.11-1.el8ev        
              rhv-4-mgmt-agent-for-rhel-8-x86_64-rpms                  8.7 k
    replacing  rhv-openvswitch.noarch 1:2.11-7.el8ev
 unbound                            x86_64                  1.7.3-15.el8        
              rhel-8-for-x86_64-appstream-rpms                        895 k
Skipping packages with conflicts:
(add '--best --allowerasing' to command line to force their upgrade):
 glusterfs                          x86_64                  3.12.2-40.2.el8     
             rhel-8-for-x86_64-baseos-rpms                            558 k
 glusterfs                          x86_64                  6.0-15.el8          
              rhel-8-for-x86_64-baseos-rpms                            658 k
 glusterfs                          x86_64                  6.0-20.el8          
              rhel-8-for-x86_64-baseos-rpms                            659 k
 glusterfs                          x86_64                  6.0-37.el8          
              rhel-8-for-x86_64-baseos-rpms                            663 k
 glusterfs                          x86_64                  6.0-37.2.el8        
              rhel-8-for-x86_64-baseos-rpms                            662 k
Skipping packages with broken dependencies:
 glusterfs-rdma                    x86_64                  3.12.2-40.2.el8      
            rhel-8-for-x86_64-baseos-rpms                            49 k
 glusterfs-rdma                    x86_64                  6.0-15.el8           
             rhel-8-for-x86_64-baseos-rpms                            46 k
 glusterfs-rdma                    x86_64                  6.0-20.el8           
             rhel-8-for-x86_64-baseos-rpms                            46 k
 glusterfs-rdma                    x86_64                  6.0-37.2.el8         
             rhel-8-for-x86_64-baseos-rpms                            48 k
 glusterfs-rdma    

[ovirt-users] Re: ISO Upload in in Paused by System Status

2021-08-05 Thread Vojtech Juranek
On Thursday, 5 August 2021 00:43:15 CEST lou...@ameritech.net wrote:
> I'm attempting to upload an ISO that is approximately 9GB in size.  I've
> successfully started the upload process via the oVirt Management
> Console/Disk.  The upload started; however, it now has a status of "Paused
> by System".  My storage type is set to NFS Data.


There is a bug [1] (fixed recently) where you have to click "Test connection" 
before the upload, otherwise the upload gets stuck.

I'd recommend clicking the "Test connection" button anyway, to verify that the 
connection works and your certificates are set up correctly.
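
If you are using the engine's internal CA, the CA certificate can be downloaded
from the engine and imported into the browser. A minimal sketch (replace
engine.example.com with your engine FQDN; the URL is the standard pki-resource
endpoint, -k is only needed because the CA is not trusted yet):

curl -k -o ovirt-engine-ca.pem \
  'https://engine.example.com/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA'

Then import ovirt-engine-ca.pem as a trusted authority in the browser used for
the upload.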

> Is something happening in the background that is contributing to the "Paused by
> System" status?  I have more than 84TB of space available, so I don't think it is a
> space issue.  Is there something that I need to do?  At this time I'm going
> to wait and see if it moves forward on its own.
 
> Please provide me with any help or direction.

You can check the imageio log at /var/log/ovirt-imageio/daemon.log, both on the 
engine and on the host, to see if there are any issues, and possibly also the 
engine log at /var/log/ovirt-engine/engine.log.
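
A quick, generic way to watch both while retrying the transfer (plain shell,
nothing oVirt-specific):

# on the engine and on the host performing the transfer
tail -f /var/log/ovirt-imageio/daemon.log
# on the engine, filtering for transfer-related messages and errors
tail -f /var/log/ovirt-engine/engine.log | grep -i -e transfer -e error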


[1] https://bugzilla.redhat.com/show_bug.cgi?id=1950593



> Thanks
> 
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/ List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2CYDZITCYRPW
> JIXHP32H6G3K2QI6OLQ/



___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VBJQLGYBWYHSB3M66VN6HDUZ7KSZIFUF/


[ovirt-users] Re: ISO Upload in in Paused by System Status

2021-08-05 Thread Yedidyah Bar David
On Thu, Aug 5, 2021 at 9:20 AM Tommy Sway  wrote:
>
> I've had this problem before, and it happened even when the CA was configured
> correctly.
>
> Then I tried again, and it worked.
>
> I still don't know why.

Me neither :-(

If this is reproducible, we can try to understand the root cause and fix it.
If you (or someone else) can't reproduce it, but file a bug and attach
all relevant logs, we can still try to diagnose, based on the logs.

We _did_ have issues around this, see also:

https://bugzilla.redhat.com/show_bug.cgi?id=1637809 (not really a bug,
more an RFE even though it's not marked so, but definitely helped a
lot since)

https://lists.ovirt.org/archives/list/users@ovirt.org/thread/NZBGRCKW6WA4WISTCRDJIFUCMNIKJ2CG/#ZYR3ZJU5V57356DFSZ6BNCWBGX5Q6PP5
(a long similar thread. Thanks to all the participants!).

I am not aware of any open issues right now. "Issues" might be real
bugs somewhere, or things that are so hard to configure right that a
significant number of users err while doing this. If you know about
one, please report it - file a bug with an accurate flow and/or all
relevant logs. Thanks!
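
For attaching logs, one option (a sketch, assuming the ovirt-log-collector
package is installed on the engine machine; check its --help for the exact
options) is to let it bundle engine and host logs into a single archive:

# run on the engine machine; it prompts for admin@internal and host credentials
ovirt-log-collector collect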

Best regards,
--
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5ESE6IIOEZSYMIA4OMAFEFDVE7JPN3OS/


[ovirt-users] Re: Restored engine backup: The provided authorization grant for the auth code has expired.

2021-08-05 Thread Strahil Nikolov via Users
If the system boots from SAN, you can just present the LUNs to the new 
host. You might need to boot into the full initramfs (the bottom entry in grub) 
and rebuild all initramfs images.
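
A minimal sketch of that rebuild step, assuming an EL8-based host with dracut
(run after booting the full/rescue initramfs entry):

# regenerate the initramfs for every installed kernel
dracut --force --regenerate-all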
Best Regards,
Strahil Nikolov
 
 
  On Wed, Aug 4, 2021 at 13:45, Yedidyah Bar David wrote:   
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/ZACZLK3PVF34VTN2N3VXKQWUHR35EQBT/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3GG7BTRXHAHJGFATPO4IOCA3EFYKJDAH/


[ovirt-users] Re: Question about pci passthrough for guest (SSD passthrough) ?

2021-08-05 Thread Strahil Nikolov via Users
You won't be able to migrate the VM from the host, but it should work.
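
If it helps, a rough pre-check before passing the NVMe controller through (the
PCI address below is only an example; substitute the one lspci reports):

# locate the NVMe controller and its PCI address
lspci -nn | grep -i 'non-volatile memory'
# confirm which IOMMU group that address belongs to (ideally it should not
# share a group with devices you want to keep on the host)
find /sys/kernel/iommu_groups/ -type l | grep '3b:00.0'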

Best Regards,
Strahil Nikolov
 
 
  On Wed, Aug 4, 2021 at 12:49, Tony Pearce wrote:

I have recently added a freshly installed host on 4.4, with 3 x NVIDIA GPUs which have 
been passed through to a guest VM instance. This went very smoothly and the 
guest can use all 3 host GPUs. 
The next thing we did was to configure "local storage" so that the single guest 
instance can make use of faster NVMe storage (100,000 IOPS) compared to the 
network iSCSI storage, which is rated at 35,000 IOPS. 
The caveat with local storage is that I can only use the remaining free space 
in /var/ for disk images. The result is that the 1TB SSD has around 700GB remaining 
free space.
So I was wondering about simply passing through the NVMe SSD (PCI) to the 
guest, so the guest can utilise the full SSD. 
Are there any "gotchas" with doing this other than the usual GPU passthrough 
ones? 
Also my apologies if this is duplicated. I originally asked this [1] a couple 
of days ago but I am not sure what happened. 
Kind regards, 

Tony Pearce

[1] Question about pci pass-thru - Users - Ovirt List Archives
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/WJKUWVIJCZT7LBSU5QI43GGTPEYIQSQN/
  
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EUNJQS6VZYI2JOX7EQ3WSVNYUJJJKIOF/


[ovirt-users] Re: ISO Upload in in Paused by System Status

2021-08-05 Thread Tommy Sway
I've had this problem before, and it happened even when the CA was configured
correctly.

Then I tried again, and it worked.

I still don't know why.

From: users-boun...@ovirt.org  On Behalf Of Yedidyah 
Bar David
Sent: Thursday, August 5, 2021 1:40 PM
To: lou...@ameritech.net
Cc: users 
Subject: [ovirt-users] Re: ISO Upload in in Paused by System Status

 

On Thu, Aug 5, 2021 at 1:44 AM lou...@ameritech.net wrote:

I'm attempting to upload an ISO that is approximately 9GB in size.  I've 
successfully started the upload process via the oVirt Management Console/Disk.  
The upload started; however, it now has a status of "Paused by System".  My 
storage type is set to NFS Data.

Is something happening in the background that is contributing to the "Paused by 
System" status?  I have more than 84TB of space available, so I don't think it is a 
space issue.  Is there something that I need to do?  At this time I'm going to 
wait and see if it moves forward on its own.

Please provide me with any help or direction.

 

Do you use the internal CA or an external one?

If an external one, did you strictly follow the procedure to replace the CA?

Did you import the CA cert to your browser?

 

Please search the list archives for similar issues. Thanks.

 

Good luck and best regards,

-- 

Didi

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4THGWA5YXNKMMTVVF7O7G3SK3TNNO2HD/