[ovirt-users] nested virtualization and promiscuous mode
I want to run a VM which will itself be a KVM host for a number of nested KVM guests. Each guest running in that nested environment will have a vNIC with an IP address on the same subnet as the top-level hypervisor (the oVirt node). In VMware vSphere environments I was able to do this by enabling promiscuous mode and forged transmits on the VMware distributed switch port group, as described in this article: https://williamlam.com/2013/11/why-is-promiscuous-mode-forged.html

I've searched a number of old threads in the oVirt list archives. Many refer to vdsm hooks that no longer appear to exist in oVirt 4.5.1. How can I accomplish the same thing in oVirt?

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/FNDL4SGRLTABF37K6IDJOKBET7ZPBZ7X/
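One note on where to look: in oVirt, the rough equivalent of VMware's forged-transmits toggle is the network filter applied to the vNIC. By default, oVirt vNIC profiles apply the vdsm-no-mac-spoofing libvirt filter, which drops frames whose source MAC differs from the vNIC's own MAC and therefore breaks guests bridged inside a nested hypervisor; a vNIC profile configured with "No Network Filter" removes that restriction. Under the hood this shows up as a <filterref> element in the libvirt domain XML. A sketch of what to look for (the MAC and bridge names below are hypothetical):

```
<!-- Default oVirt vNIC: this filter blocks forged/spoofed source MACs,
     which breaks nested guests bridging their traffic through the vNIC. -->
<interface type='bridge'>
  <mac address='56:6f:00:00:00:01'/>       <!-- hypothetical MAC -->
  <source bridge='ovirtmgmt'/>
  <filterref filter='vdsm-no-mac-spoofing'/>
  <model type='virtio'/>
</interface>
<!-- With a "No Network Filter" vNIC profile, the <filterref> element is
     simply absent, so frames from nested guests pass through. -->
```

You can inspect the XML of a running VM on the host with `virsh dumpxml <vm>` to confirm which filter, if any, is attached to the vNIC.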
[ovirt-users] Re: Import KVM VMs on individual iSCSI luns
On Sat, Jul 2, 2022 at 9:40 AM wrote:
>
> Greetings,
>
> Is it possible with oVirt to import existing VMs where the underlying
> storage is on raw iSCSI LUNs and to keep them on those LUNs?
>
> The historical scenario is that we have virtual farms in multiple sites
> managed by an ancient orchestration tool that does not support modern OSes
> as the hypervisor.
> - In each site, there are clusters of hypervisors/hosts that have visibility
>   to the same iSCSI LUNs.
> - Each VM has its own set of iSCSI LUNs that are totally dedicated to that VM.
> - Each VM is using LVM to manage the disk.
> - Each host has LVM filtering configured to NOT manage the VM's iSCSI LUNs.
> - The VMs can be live migrated from any hypervisor within the cluster to any
>   other hypervisor in that same cluster.
>
> We are attempting to bring this existing environment into oVirt without
> replacing the storage model.
> Is there any documentation that will serve as a guide for this scenario?
>
> In a lab environment, we have successfully
> - Added 2 hypervisors (hosts), and oVirt can see their VMs as
>   external-ovtest1 and external-ovtest2
> - Removed the LVM filtering on the hosts

This should not be needed. The LVM filter ensures that the host can manage only the disks used by the host itself (for example the boot disk). Other disks (e.g. your LUNs) are not managed by the host; they are managed by oVirt.

> - Created a storage domain that is able to see the iSCSI LUNs, but we have
>   not yet done the 'add' of each LUN

Don't create a storage domain, since you want to use the LUNs directly. Adding the LUNs to a storage domain can destroy the data on the LUNs.

> Is it possible to import these LUNs as raw block devices without LVM being
> layered on top of them?

Yes, this is called a Direct LUN in oVirt.

> Is it required to actually import the LUNs into a storage domain, or can the
> VMs still be imported if all LUNs are visible on all hosts in the cluster?
There is no way to import the VM as is, but you can recreate the VM with the same LUNs.

> In the grand scheme of things, are we trying to do something that is not
> possible with oVirt?
> If it is possible, we would greatly appreciate tips, pointers, links to
> docs, etc. that will help us migrate this environment to oVirt.

You can do this:

1. Connect to the storage server with the relevant LUNs. The LUNs used by the VM should be visible in the engine UI (New storage domain dialog).
2. Create a new VM using the same configuration you had in the original VM.
3. Attach the right LUNs to the VM (using Direct LUN).
4. In the VM, make sure you use the right disks - the best way is to use:
   /dev/disk/by-id/{virtio}-{serial}
   where {serial} is the disk UUID seen in the engine UI for the Direct LUN. The {virtio} prefix is correct if you connect the disks using virtio; if you use virtio-scsi the string will be different.

You may also need to install extra components on the boot disk, like qemu-guest-agent or virtio drivers.

Note that oVirt does not use the LUNs directly, but the multipath device on top of the SCSI device. This should be transparent, but be prepared to see /dev/mapper/{wwid} instead of /dev/sdXXX.

Nir

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NW5HEA7YWS5OPPYOZLUQHQQEB4MKZC7U/
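To see how step 4 works in practice: the stable name under /dev/disk/by-id/ is just a udev-managed symlink to the actual block device, so resolving it with readlink shows which device node a Direct LUN landed on. A minimal sketch (the serial and target path are made-up for illustration; on a real guest the links are created by udev and point at /dev/vdX or /dev/mapper/{wwid}):

```shell
# Hypothetical serial, as would be shown in the engine UI for a Direct LUN
serial="0001-0002-0003"

# Simulate the udev-created symlink so the resolution step can be shown
# without real hardware; on a real guest the link already exists.
mkdir -p /tmp/disk/by-id
touch /tmp/fake-vda
ln -sfn /tmp/fake-vda "/tmp/disk/by-id/virtio-${serial}"

# Resolve the stable by-id name to the underlying device node
readlink -f "/tmp/disk/by-id/virtio-${serial}"
```

Referencing disks through /dev/disk/by-id in fstab or LVM configuration keeps the guest working even if the kernel enumerates the devices in a different order after migration.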
[ovirt-users] Keep showing: ovirt kernel: NFS: __nfs4_reclaim_open_state: Lock reclaim failed!
I have two identical systems, each with 1 standalone engine + 1 node + 1 NFS storage. One system works well, but on the other, starting a VM always ends with the VM paused due to a storage I/O error. Can anyone help? Thanks.

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/5KUWDHCQ5OX2ZQSVAGGJDHHPOPJWCQRQ/
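For whoever picks this up: "__nfs4_reclaim_open_state: Lock reclaim failed!" is an NFSv4 client message that typically appears when the client tries to reclaim open/lock state after the server restarted or the client's lease expired, so comparing the NFS setup of the working and failing systems is a reasonable first step. A command sketch (standard nfs-utils tooling; run on the failing host):

```
# Confirm NFS version and mount options; compare vers=, soft/hard,
# and timeo= between the working and the failing system
mount -t nfs4

# NFSv4 operation and error counters
nfsstat -c

# Look for lease expiry / server restart / reclaim messages
dmesg | grep -i nfs
```

If the storage server rebooted or its NFS service restarted recently, the failed reclaim and the resulting I/O errors on the VM would be consistent with that; checking the server-side logs around the time of the pause is also worthwhile.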
[ovirt-users] Import KVM VMs on individual iSCSI luns
Greetings,

Is it possible with oVirt to import existing VMs where the underlying storage is on raw iSCSI LUNs and to keep them on those LUNs?

The historical scenario is that we have virtual farms in multiple sites managed by an ancient orchestration tool that does not support modern OSes as the hypervisor.
- In each site, there are clusters of hypervisors/hosts that have visibility to the same iSCSI LUNs.
- Each VM has its own set of iSCSI LUNs that are totally dedicated to that VM.
- Each VM is using LVM to manage the disk.
- Each host has LVM filtering configured to NOT manage the VM's iSCSI LUNs.
- The VMs can be live migrated from any hypervisor within the cluster to any other hypervisor in that same cluster.

We are attempting to bring this existing environment into oVirt without replacing the storage model. Is there any documentation that will serve as a guide for this scenario?

In a lab environment, we have successfully
- Added 2 hypervisors (hosts), and oVirt can see their VMs as external-ovtest1 and external-ovtest2
- Removed the LVM filtering on the hosts
- Created a storage domain that is able to see the iSCSI LUNs, but we have not yet done the 'add' of each LUN

Is it possible to import these LUNs as raw block devices without LVM being layered on top of them? Is it required to actually import the LUNs into a storage domain, or can the VMs still be imported if all LUNs are visible on all hosts in the cluster? In the grand scheme of things, are we trying to do something that is not possible with oVirt? If it is possible, we would greatly appreciate tips, pointers, links to docs, etc. that will help us migrate this environment to oVirt.
Thanks in Advance - S

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWVOAQEZJHZUIVJ4BOHP464WKMGZFLI7/
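A side note on the LVM filtering mentioned above: keeping the host's LVM away from the VM LUNs is normally done with a filter (or global_filter) in /etc/lvm/lvm.conf, and it should be kept in place when moving to oVirt. A sketch, assuming the host's own volumes sit on /dev/sda2 (the device names here are hypothetical):

```
# /etc/lvm/lvm.conf (fragment)
devices {
    # Accept only the host's local disk; reject everything else, so LVM
    # on the host never scans or activates VGs living on the VM LUNs.
    global_filter = [ "a|^/dev/sda2$|", "r|.*|" ]
}
```

On oVirt 4.4 and later, `vdsm-tool config-lvm-filter` can generate an appropriate host-side filter automatically, which is usually safer than hand-editing lvm.conf.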
[ovirt-users] Stuck in Manager upgrade. Can't set Cluster to maintenance mode.
Hi Folks,

Like many others, the oVirt hosted engine certificates expired on my installation. We tried to follow this knowledge base article: https://access.redhat.com/solutions/6865861

I set the host which runs the hosted engine into global maintenance mode via the "hosted-engine --set-maintenance --mode=global" command. Then I try to execute the "engine-setup --offline" command. There we answer all questions, and the script recognizes the expired certificates. But when we try to execute the last step, it aborts with the following error message.

Output of engine-setup --offline:

[WARNING] Failed to read or parse '/etc/pki/ovirt-engine/keys/apache.p12'
          Perhaps it was changed since last Setup.
          Error was: Mac verify error: invalid password?
          One or more of the certificates should be renewed, because they expire soon, or include an invalid expiry date, or they were created with validity period longer than 398 days, or do not include the subjectAltName extension, which can cause them to be rejected by recent browsers and up to date hosts.
          See https://www.ovirt.org/develop/release-management/features/infra/pki-renew/ for more details.
          Renew certificates? (Yes, No) [No]: Yes

          --== APACHE CONFIGURATION ==--
          --== SYSTEM CONFIGURATION ==--
          --== MISC CONFIGURATION ==--
          --== END OF CONFIGURATION ==--

[ INFO  ] Stage: Setup validation
          During execution engine service will be stopped (OK, Cancel) [OK]: Ok
[ ERROR ] It seems that you are running your engine inside of the hosted-engine VM and are not in "Global Maintenance" mode. In that case you should put the system into the "Global Maintenance" mode before running engine-setup, or the hosted-engine HA agent might kill the machine, which might corrupt your data.
[ ERROR ] Failed to execute stage 'Setup validation': Hosted Engine setup detected, but Global Maintenance is not set.
[ INFO  ] Stage: Clean up
          Log file is located at /var/log/ovirt-engine/setup/ovirt-engine-setup-20220701205812-yu1osl.log
[ INFO  ] Generating answer file '/var/lib/ovirt-engine/setup/answers/20220701205843-setup.conf'
[ INFO  ] Stage: Pre-termination
[ INFO  ] Stage: Termination
[ ERROR ] Execution of setup failed

Any ideas how to get the hosted engine into global maintenance mode?

Thanks for your help in advance!

Best Regards,
J. Lutz

List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/OWSYUQDLDQX7D2KIY6GKMWGOEBMS3ZPM/
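One thing worth checking: the global maintenance flag lives in the HA metadata on the hosts' shared storage, maintained by the ovirt-ha-agent services, not inside the engine VM itself. So before re-running engine-setup it is worth verifying from a host shell that the flag actually took effect. A command sketch (run on a hosted-engine host, not inside the engine VM):

```
# Set the flag
hosted-engine --set-maintenance --mode=global

# Verify: the status output should include a line like
#   !! Cluster is in GLOBAL MAINTENANCE mode !!
hosted-engine --vm-status
```

If --vm-status does not report global maintenance, check that the ovirt-ha-agent and ovirt-ha-broker services are running on the hosts (`systemctl status ovirt-ha-agent ovirt-ha-broker`); an agent that cannot write the shared metadata leaves the flag unset, and engine-setup will keep refusing to run.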