[ovirt-users] Re: Where to get the vm config file ?

2021-05-28 Thread Strahil Nikolov via Users
Also, on each power-up of the VM, the XML file is stored in vdsm's logs.
Best Regards,
Strahil Nikolov
 
 
  On Fri, May 28, 2021 at 7:32, dhanaraj.ramesh--- via Users 
wrote:   https://access.redhat.com/solutions/795203

In case of RHEV, these files are not stored under /etc/libvirt/qemu.
The vdsm daemon dynamically fetches the VM's information from the
RHEV-Manager's database to generate the XML files.
These files cannot be edited to make persistent changes, as they only exist
for the lifecycle of the VM. However, they can be viewed read-only using the
following command, or dumped to a location for later viewing (when the VM is
powered off):

virsh -r dumpxml vm_name > /tmp/vm_name
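As noted above in the thread, the generated domain XML is also written to vdsm's log on each VM power-up. A minimal sketch of both approaches; the VM name is a placeholder and the log path assumes a default vdsm install:

```shell
# Placeholder VM name -- substitute your own.
VM_NAME="vm_name"

# Read-only dump of the running VM's libvirt XML (as in the command above):
command -v virsh >/dev/null && virsh -r dumpxml "$VM_NAME" > "/tmp/${VM_NAME}.xml"

# The same XML is logged by vdsm on each VM power-up; locate recent copies:
VDSM_LOG="/var/log/vdsm/vdsm.log"
[ -f "$VDSM_LOG" ] && grep -n '<domain' "$VDSM_LOG" | tail -n 5 || true
```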
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X7A4MYOLZRTKXLGYTOBMUUF4H7SWRVTZ/
  


[ovirt-users] Re: Hosted-engine fail and host reboot

2021-05-28 Thread Strahil Nikolov via Users
Maybe you can remove 6900/tcp from firewalld and try again?
Best Regards,
Strahil Nikolov
 
 
  On Thu, May 27, 2021 at 19:43, Dominique D 
wrote: It seems to be this problem.

I tried to install it again with version 4.4.6-2021051809 and I get this 
message. 

[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Open a port on firewalld]
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "ERROR: 
Exception caught: org.fedoraproject.FirewallD1.Exception: ALREADY_ENABLED: 
'6900:tcp' already in 'public' Non-permanent operation"}
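The failing task can usually be retried after clearing the stale runtime rule. A hedged sketch, assuming the duplicate entry is in the public zone as the ALREADY_ENABLED error suggests:

```shell
PORT="6900/tcp"   # the port the deployment task tries to open
ZONE="public"     # zone named in the ALREADY_ENABLED error

if command -v firewall-cmd >/dev/null; then
    # Remove only the runtime (non-permanent) entry, then verify it is gone:
    firewall-cmd --zone="$ZONE" --remove-port="$PORT"
    firewall-cmd --zone="$ZONE" --list-ports
fi
```

After this, re-running the hosted-engine deployment should let the "Open a port on firewalld" task add the rule cleanly.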
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DASEBWHNO2RT2QNH23KYD7ENNSLFWYLN/
  


[ovirt-users] Can't remove snapshot

2021-05-28 Thread David Johnson
Hi all,

I patched one of my Windows VMs yesterday. I started by snapshotting the
VM, then applied the Windows update. Now that the patch has been tested, I
want to remove the snapshot. I get this message:

Error while executing action:

win-sql-2019:

   - Cannot remove Snapshot. The following attached disks are in ILLEGAL
   status: win-2019-tmpl_Disk1 - please remove them and try again.


Does anyone have any thoughts on how to recover from this? I really don't
want to keep this snapshot hanging around.
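For diagnosis only: the ILLEGAL state lives in the engine database (imagestatus = 4 in the images table, per past posts on this list). A read-only inspection sketch to run on the engine host; the table and column names are assumptions to verify, and the database should never be modified without a backup and support guidance:

```shell
# Read-only: list disk images the engine currently marks ILLEGAL.
ILLEGAL_STATUS=4   # assumed imagestatus value for ILLEGAL in the engine schema

# Guarded so this only runs on an actual engine host:
if command -v engine-setup >/dev/null; then
    su - postgres -c "psql engine -c \
      \"SELECT image_guid, imagestatus FROM images WHERE imagestatus = ${ILLEGAL_STATUS};\""
fi
```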

Thanks in advance,

*David Johnson*
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PJODGOB2MI6EQQHSJSRXFWRZGJXMZH6P/


[ovirt-users] Re: After upgrade only 1/3 hosts is running Node 4.4.6

2021-05-28 Thread Jayme
Removing the ovirt-node-ng-image-update package and re-installing it
manually seems to have done the trick. Thanks for pointing me in the right
direction!
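For anyone hitting the same state, the sequence that worked can be sketched roughly as follows; the package name and URL are taken from the thread below, so adjust the version to your target release:

```shell
PKG="ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch"
URL="https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/${PKG}.rpm"

# Guarded so this only runs on an oVirt Node host:
if command -v nodectl >/dev/null; then
    dnf remove -y "$PKG"      # drop the stale image-update package
    dnf install -y "$URL"     # reinstall so the new image layer is applied
    nodectl info              # confirm the 4.4.6 layer/bootloader entry appears
fi
```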

On Thu, May 27, 2021 at 9:57 PM Jayme  wrote:

> # rpm -qa | grep ovirt-node
> ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
> python3-ovirt-node-ng-nodectl-4.4.0-1.el8.noarch
> ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch
>
> I removed ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch but yum update
> and check for updates in GUI still show no updates available.
>
> I can attempt re-installing the package tomorrow, but I'm not confident it
> will work since it was already installed.
>
>
> On Thu, May 27, 2021 at 9:32 PM wodel youchi 
> wrote:
>
>> Hi,
>>
>> On the "bad hosts", check whether any 4.4.6 RPMs are installed; if so,
>> remove them, then try the update again.
>>
>> You can try to install the ovirt-node rpm manually, here is the link
>> https://resources.ovirt.org/pub/ovirt-4.4/rpm/el8/noarch/ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>>
>>> # dnf install ovirt-node-ng-image-update-4.4.6.3-1.el8.noarch.rpm
>>>
>>
>> PS: remember to use tmux if executing via ssh.
>>
>> Regards.
>>
>> On Thu, 27 May 2021 at 22:21, Jayme  wrote:
>>
>>> The good host:
>>>
>>> bootloader:
>>>   default: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>>>   entries:
>>> ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64):
>>>   index: 0
>>>   kernel:
>>> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/vmlinuz-4.18.0-301.1.el8.x86_64
>>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
>>> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1 
>>> rd.lvm.lv=onn_orchard1/swap
>>> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
>>> img.bootid=ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>>   initrd:
>>> /boot//ovirt-node-ng-4.4.6.3-0.20210518.0+1/initramfs-4.18.0-301.1.el8.x86_64.img
>>>   title: ovirt-node-ng-4.4.6.3-0.20210518.0 (4.18.0-301.1.el8.x86_64)
>>>   blsid: ovirt-node-ng-4.4.6.3-0.20210518.0+1-4.18.0-301.1.el8.x86_64
>>> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>>>   index: 1
>>>   kernel:
>>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard1-swap
>>> rd.lvm.lv=onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
>>> rd.lvm.lv=onn_orchard1/swap
>>> rhgb quiet boot=UUID=3069e23f-5dd6-49a8-824d-e54efbeeb9a3 rootflags=discard
>>> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   root: /dev/onn_orchard1/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   initrd:
>>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>>>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
>>> (4.18.0-240.15.1.el8_3.x86_64)
>>>   blsid:
>>> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
>>> layers:
>>>   ovirt-node-ng-4.4.5.1-0.20210323.0:
>>> ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   ovirt-node-ng-4.4.6.3-0.20210518.0:
>>> ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>> current_layer: ovirt-node-ng-4.4.6.3-0.20210518.0+1
>>>
>>>
>>> The other two show:
>>>
>>> bootloader:
>>>   default: ovirt-node-ng-4.4.5.1-0.20210323.0
>>> (4.18.0-240.15.1.el8_3.x86_64)
>>>   entries:
>>> ovirt-node-ng-4.4.5.1-0.20210323.0 (4.18.0-240.15.1.el8_3.x86_64):
>>>   index: 0
>>>   kernel:
>>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/vmlinuz-4.18.0-240.15.1.el8_3.x86_64
>>>   args: crashkernel=auto resume=/dev/mapper/onn_orchard2-swap
>>> rd.lvm.lv=onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1 
>>> rd.lvm.lv=onn_orchard2/swap
>>> rhgb quiet boot=UUID=cd9dd412-2acd-4f3d-9b3e-44030153856f rootflags=discard
>>> img.bootid=ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   root: /dev/onn_orchard2/ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>   initrd:
>>> /boot//ovirt-node-ng-4.4.5.1-0.20210323.0+1/initramfs-4.18.0-240.15.1.el8_3.x86_64.img
>>>   title: ovirt-node-ng-4.4.5.1-0.20210323.0
>>> (4.18.0-240.15.1.el8_3.x86_64)
>>>   blsid:
>>> ovirt-node-ng-4.4.5.1-0.20210323.0+1-4.18.0-240.15.1.el8_3.x86_64
>>> layers:
>>>   ovirt-node-ng-4.4.5.1-0.20210323.0:
>>> ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>> current_layer: ovirt-node-ng-4.4.5.1-0.20210323.0+1
>>>
>>> On Thu, May 27, 2021 at 6:18 PM Jayme  wrote:
>>>
 It shows the 4.4.5 image on two hosts and 4.4.6 on one. Yum update shows
 nothing available, nor does check for upgrade in the admin GUI.

 I believe these two hosts failed on first install and succeeded on
 second attempt which may have something to do with it. How can I force them
 to update to 4.4.6 image? Would reinstall host do it?

 On Thu, May 27, 2021 at 6:03 PM wodel youchi 
 wrote:

> Hi,
>
> What does "nodectl info" reports on all hosts?
> did you 

[ovirt-users] [ANN] oVirt 4.4.7 First Release Candidate is now available for testing

2021-05-28 Thread Lev Veyde
oVirt 4.4.7 First Release Candidate is now available for testing

The oVirt Project is pleased to announce the availability of oVirt 4.4.7
First Release Candidate for testing, as of May 27th, 2021.

This update is the seventh in a series of stabilization updates to the 4.4
series.
How to prevent hosts entering emergency mode after upgrade from oVirt 4.4.1

Note: Upgrading from 4.4.2 GA or later should not require re-doing these
steps if they were already performed while upgrading from 4.4.1 to 4.4.2 GA.
They only need to be done once.

Due to Bug 1837864 -
Host enters emergency mode after upgrading to latest build

If you have your root file system on a multipath device on your hosts, you
should be aware that after upgrading from 4.4.1 to 4.4.7 your host may enter
emergency mode.

In order to prevent this, be sure to upgrade oVirt Engine first, then on
your hosts:

   1. Remove the current LVM filter while still on 4.4.1, or in emergency
      mode (if rebooted).
   2. Reboot.
   3. Upgrade to 4.4.7 (redeploy in case of already being on 4.4.7).
   4. Run vdsm-tool config-lvm-filter to confirm there is a new filter in
      place.
   5. Only if not using oVirt Node: run "dracut --force --add multipath" to
      rebuild initramfs with the correct filter configuration.
   6. Reboot.
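On a plain EL host (not oVirt Node), steps 4 and 5 above can be sketched as shell commands. The lvm.conf path is the stock default; editing the filter itself (step 1) remains a manual step:

```shell
LVM_CONF="/etc/lvm/lvm.conf"

# Step 1 is manual: inspect (then remove, by hand) the current LVM filter:
[ -f "$LVM_CONF" ] && grep -n '^[[:space:]]*filter' "$LVM_CONF" || true

# Step 4: after the upgrade, let vdsm verify/install the correct filter:
command -v vdsm-tool >/dev/null && vdsm-tool config-lvm-filter || true

# Step 5 (non-oVirt-Node only): rebuild initramfs with multipath support:
command -v dracut >/dev/null && dracut --force --add multipath || true
```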

Documentation

   - If you want to try oVirt as quickly as possible, follow the
     instructions on the Download page.
   - For complete installation, administration, and usage instructions, see
     the oVirt Documentation.
   - For upgrading from a previous version, see the oVirt Upgrade Guide.
   - For a general overview of oVirt, see About oVirt.

Important notes before you try it

Please note this is a pre-release build.

The oVirt Project makes no guarantees as to its suitability or usefulness.

This pre-release must not be used in production.
Installation instructions

For installation instructions and additional information please refer to:

https://ovirt.org/documentation/

This release is available now on x86_64 architecture for:

* Red Hat Enterprise Linux 8.4 or similar

* CentOS Stream 8

This release supports Hypervisor Hosts on x86_64 and ppc64le architectures
for:

* Red Hat Enterprise Linux 8.4 or similar

* CentOS Stream 8

* oVirt Node 4.4 based on CentOS Stream 8 (available for x86_64 only)

See the release notes [1] for installation instructions and a list of new
features and bugs fixed.

Notes:

- oVirt Appliance is already available based on CentOS Stream 8

- oVirt Node NG is already available based on CentOS Stream 8

Additional Resources:

* Read more about the oVirt 4.4.7 release highlights:
http://www.ovirt.org/release/4.4.7/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] http://www.ovirt.org/release/4.4.7/

[2] http://resources.ovirt.org/pub/ovirt-4.4-pre/iso/

-- 

Lev Veyde

Senior Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UOWNXJS47YDNA3O6F5AAQPG3AH6TAEAM/


[ovirt-users] Re: Import Geo-Replicated Storage Domain fails

2021-05-28 Thread Simon Scott
Hi All,

Does anyone have any further input on this please?

Kind regards

Simon...

On 25 May 2021, at 09:26, Ritesh Chikatwar  wrote:


Sas, maybe you have some thoughts on this

On Tue, May 25, 2021 at 1:19 PM Vojtech Juranek wrote:
(CC Pavel, who recently worked on DR, maybe he will have some thoughts)

On Monday, 24 May 2021 17:56:56 CEST 
si...@justconnect.ie wrote:
> Hi All,
>
> I have 2 independent Hyperconverged Sites/Data Centers.
>
> Site A has a GlusterFS Replica 3 + Arbiter Volume that is Storage Domain
> data2.
> This Volume is Geo-Replicated to a Replica 3 + Arbiter Volume at
> Site B called data2_bdt
> I have simulated a DR event and now want to import the Geo-Replicated volume
> data2_bdt as a Storage Domain on Site B. Once imported, I need to import the
> VMs on this volume to run in Site B.

> The Geo-Replication now works perfectly (thanks Strahil) but I haven't been
> able to import the Storage Domain.

> Please can someone point me in the right direction or documentation on how
> this can be achieved.

> Kind Regards
>
> Shimme...
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to 
> users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/ List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/LQCTZS6YTKMME2EHBXJEGUM2WDNSYXEC/

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 
users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A6J63RH74YKX7OCK5RCR5IQOUDSF7GG7/