[ovirt-users] Re: HostedEngine Restore woes

2022-08-03 Thread Yedidyah Bar David
On Thu, Aug 4, 2022 at 2:51 AM  wrote:

> Many thanks for your help Didi.
>
> I must've missed the following section you pointed out:
>
> | We do have a section about restoring a backup inside the engine VM,
> | assuming that it's still ok - search for "Overwriting a Self-Hosted
> | Engine from an Existing Backup".
>
> It worked perfectly thanks.
>

Glad to hear that. Thanks for the update!


>
> As for the build of a 3 node environment using Foreman and ansible, it
> takes about 1-2 hours from start to finish.
>

Yes, we do run it routinely in our QE - but I seldom hear about real users
doing that...

And our QE did sometimes find bugs there that did not affect
'hosted-engine --deploy', but I can't recall even one such bug report from
a real user.

The main practical difference between them, other than the obvious one of
having to provide all answers in a var file beforehand, is that it does not
use our ansible callback for generating the log files. Depending on how you
run ansible, this will likely make it somewhat harder to investigate
problems: with the callback, we log each time an ansible var changes its
value, but without it, you rely on the code having enough 'debug' tasks at
relevant points.
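
For example (a hedged sketch, not the exact documented invocation - the
playbook and var-file names below are placeholders), raising ansible's own
verbosity and keeping a local log can partly compensate for the missing
callback:

ansible-playbook -vvv -e @he_deployment_vars.yml he_deployment.yml \
  2>&1 | tee he_deployment_$(date +%Y%m%d%H%M).log

-vvv prints each task's full result, including registered variables, so you
can at least see most of the values the callback would otherwise have logged.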

Best regards,
-- 
Didi
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UIFDG4ZNMVWKGJ4NHTHHVBQGRSXTK4KI/


[ovirt-users] Single Peer in Gluster Cluster Failure caused Storage Domain outage

2022-08-03 Thread simon
Hi All,

We have a 3-node HCI cluster with Gluster 2+1 (replica 2 plus arbiter) volumes.

The first node had a hardware memory failure which caused file corruption on
the engine LV, and the server would only boot into maintenance mode.

For some reason glusterd wouldn't start, and one of the volumes became
inaccessible, taking its storage domain offline. This caused multiple VMs to
go into a paused or shut-down state.
We put the host into maintenance mode and then shut it down in an attempt to
let Gluster continue across the remaining 2 nodes (one being the arbiter).
Unfortunately this didn't work.

The solution was to do the following:
1. Remove the contents of /var/lib/glusterd except for glusterd.info
2. Start glusterd
3. Peer probe one of the other 2 peers
4. Restart glusterd
5. Cross fingers and toes
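
In command terms, steps 1-4 were roughly the following (a hedged sketch, not
verbatim; the peer name is a placeholder, and glusterd.info must be kept so
the node retains its UUID):

systemctl stop glusterd                    # it was not running properly anyway
cd /var/lib/glusterd
find . -mindepth 1 ! -name glusterd.info -delete
systemctl start glusterd
gluster peer probe <healthy-peer-fqdn>     # any one of the two remaining peers
systemctl restart glusterd                 # re-reads the peer/volume config synced back after the probe
gluster peer status                        # verify before re-activating the host in the engine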

Although this was a successful outcome, I would like to know why losing one
Gluster peer caused the outage of a single storage domain, and therefore
outages of the VMs with disks on that storage domain.

Kind Regards

Simon...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/QHA7YMU666W6KKZWZ5U3XFTWIND6ZMEQ/


[ovirt-users] Re: HostedEngine Restore woes

2022-08-03 Thread simon
Many thanks for your help Didi.

I must've missed the following section you pointed out:

| We do have a section about restoring a backup inside the engine VM,
| assuming that it's still ok - search for "Overwriting a Self-Hosted
| Engine from an Existing Backup".

It worked perfectly thanks.

As for the build of a 3 node environment using Foreman and ansible, it takes 
about 1-2 hours from start to finish.

Kind Regards

Simon...
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/JFBSFDMHUJAQ24PCDDRFSO3TVYKRHX2O/


[ovirt-users] Re: gluster service on the cluster is unchecked on hci cluster

2022-08-03 Thread Jiří Sléžka

On 8/3/22 at 03:06, Strahil Nikolov wrote:

I think it's related to Compute -> Clusters -> Cluster Name -> Gluster Hooks

I think https://access.redhat.com/solutions/6644151 should solve the 
problem (you can use a developer subscription to access it).


Thanks, I really did have 5 hook conflicts:

/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c \
  "select id, name, hook_status, content_type, conflict_status
   from gluster_hooks where conflict_status != 0;"


                  id                  |        name        | hook_status | content_type | conflict_status
--------------------------------------+--------------------+-------------+--------------+-----------------
 517462b4-104d-40d1-ac94-3f8baea8e80b | 30samba-start.sh   | ENABLED     | TEXT         |               4
 d428056d-f6fd-4e56-a48a-ccbdd273b774 | 30samba-set.sh     | ENABLED     | TEXT         |               4
 a1d8857a-9378-42af-81a8-89a4c75eb52e | 30samba-stop.sh    | ENABLED     | TEXT         |               4
 af362bbf-d1ea-4d5e-ae07-492c7ce0966f | 29CTDBsetup.sh     | ENABLED     | TEXT         |               4
 d3bdf3df-13f1-48d8-92d9-03d09989516f | 29CTDB-teardown.sh | ENABLED     | TEXT         |               4
(5 rows)

I removed them and then synced the Gluster hooks in the cluster.
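
(For the record, a hedged sketch of what that removal could look like at the
database level, using the same engine-psql.sh helper as above - the conflicts
can also be resolved from the cluster's Gluster Hooks tab in the webadmin:)

/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c \
  "delete from gluster_hooks where conflict_status != 0;"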

Another diagnostic step,

rpm -qV glusterfs-server

revealed that some hooks are missing on one of the hosts:

[root@ovirt-hci01 ~]# rpm -qV glusterfs-server
.M...  c /var/lib/glusterd/glusterd.info
missing /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh
missing /var/lib/glusterd/hooks/1/start/post/S29CTDBsetup.sh
missing /var/lib/glusterd/hooks/1/start/post/S30samba-start.sh
missing /var/lib/glusterd/hooks/1/stop/pre/S29CTDB-teardown.sh
missing /var/lib/glusterd/hooks/1/stop/pre/S30samba-stop.sh

I reinstalled the glusterfs-server package there.
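
Roughly, assuming a dnf-based host, that amounts to:

dnf reinstall -y glusterfs-server
rpm -qV glusterfs-server   # the hook scripts should no longer be reported as missing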

Well, I did all this only to change the CPU Type in the cluster, but now,
with the Gluster service checked, when I try to change the CPU Type I get:


"Error while executing action: Cannot update cluster because the update 
triggered update of the VMs/Templates and it failed for the following: 
HostedEngine. To fix the issue, please go to each of them, edit, change 
the Custom Compatibility Version (or other fields changed previously in 
the cluster dialog) and press OK. If the save does not pass, fix the 
dialog validation. After successful cluster update, you can revert your 
Custom Compatibility Version change (or other changes). If the problem 
still persists, you may refer to the engine.log file for further details."


A strange thing, and probably a bug: this action disables the Gluster
service checkbox in the cluster! I will try to report it...


I also have no idea what is wrong with HostedEngine, as there are (as far as
I can see) no custom settings on it... but I cannot change, for example, its
memory, because "There was an attempt to change Hosted Engine VM values that
are locked."
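
(For reference, one hedged way to check what the engine actually stores for
that VM - a guess at the relevant columns, assuming vm_static still carries
them in 4.x:)

/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c \
  "select vm_name, custom_compatibility_version, origin from vm_static
   where vm_name = 'HostedEngine';"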


2022-08-03 22:32:01,436+02 WARN 
[org.ovirt.engine.core.bll.UpdateVmCommand] (default task-3193) 
[b93958d9-b27d-4f1b-97f0-d78312c2d346] Validation of action 'UpdateVm' 
failed for user admin@internal-authz. Reasons: 
VAR__ACTION__UPDATE,VAR__TYPE__VM,VM_CANNOT_UPDATE_HOSTED_ENGINE_FIELD



Cheers,

Jiri



Best Regards,
Strahil Nikolov

On Wed, Aug 3, 2022 at 1:51, Jiří Sléžka wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MUD4TF6MT33PNGVCQPLHSABJ3VKX64YG/


[ovirt-users] Re: Six node HCI

2022-08-03 Thread Strahil Nikolov via Users
*identify

On Wed, Aug 3, 2022 at 3:57, Strahil Nikolov via Users wrote:
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/DRZ2AUAD4P6M57NXG3FXHEUXSIQ5Y75B/


[ovirt-users] [ANN] oVirt 4.5.2 First Release Candidate is now available for testing

2022-08-03 Thread Lev Veyde
oVirt 4.5.2 First Release Candidate is now available for testing

The oVirt Project is pleased to announce the availability of oVirt 4.5.2
First Release Candidate for testing, as of August 3rd, 2022.

This update is the second in a series of stabilization updates to the 4.5
series.

Documentation

   - If you want to try oVirt as quickly as possible, follow the instructions
     on the Download page.
   - For complete installation, administration, and usage instructions, see
     the oVirt Documentation.
   - For upgrading from a previous version, see the oVirt Upgrade Guide.
   - For a general overview of oVirt, see About oVirt.

Important notes before you try it

Please note this is a pre-release build.

The oVirt Project makes no guarantees as to its suitability or usefulness.

This pre-release must not be used in production.

Installation instructions

For installation instructions and additional information please refer to:

https://ovirt.org/documentation/

This release is available now on x86_64 architecture for:

   - CentOS Stream 8
   - RHEL 8.6 and derivatives

This release supports Hypervisor Hosts on x86_64:

   - oVirt Node NG (based on CentOS Stream 8)
   - CentOS Stream 8
   - RHEL 8.6 and derivatives


Builds are also available for ppc64le and aarch64.

Experimental builds for CentOS Stream 9 are also provided for Hypervisor
Hosts.

See the release notes [1] for installation instructions and a list of new
features and bugs fixed.

Notes:

- oVirt Appliance is already available based on CentOS Stream 8

- oVirt Node NG is already available based on CentOS Stream 8

Additional Resources:

* Read more about the oVirt 4.5.2 pre-release highlights:
http://www.ovirt.org/release/4.5.2/

* Get more oVirt project updates on Twitter: https://twitter.com/ovirt

* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/


[1] http://www.ovirt.org/release/4.5.2/

[2] http://resources.ovirt.org/pub/ovirt-4.5-pre/iso/


-- 

Lev Veyde

Senior Software Engineer, RHCE | RHCVA | MCITP

Red Hat Israel



l...@redhat.com | lve...@redhat.com

TRIED. TESTED. TRUSTED. 
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GWBR4CJNZLEHCI4ODDLP4YWKAFA6LFVM/


[ovirt-users] Re: Moving raw/sparse disk from NFS to iSCSI fails on oVirt 4.5.1

2022-08-03 Thread Guillaume Pavese
Hello,

Just bumping my message.
We frequently import libvirt VMs with sparsified qcow2 disks, so we have
already hit this import-to-iSCSI bug many times. Is it an unsupported use
case?
Same question about moving a raw/sparse disk from an NFS volume to iSCSI.
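
For what it's worth, here is roughly how to check whether the target LV the
engine creates is simply too small for the converted image (a hedged sketch;
the path is a placeholder, and 'qemu-img measure' needs a reasonably recent
qemu-img):

# virtual vs. allocated size of the source volume on the NFS domain
qemu-img info /rhev/data-center/mnt/<nfs-domain>/<sd-uuid>/images/<img-uuid>/<vol-uuid>
# size a qcow2 copy of that volume would require on the target
qemu-img measure -O qcow2 /rhev/data-center/mnt/<nfs-domain>/<sd-uuid>/images/<img-uuid>/<vol-uuid>

If the "required size" reported by measure is larger than the LV created on
the iSCSI domain, that would match the "No space left on device" failures.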

Thanks for any feedback


Guillaume Pavese
Ingénieur Système et Réseau
Interactiv-Group


On Wed, Jul 27, 2022 at 11:45 AM Guillaume Pavese <
guillaume.pav...@interactiv-group.com> wrote:

> Did anyone have the chance to look at this problem?
>
> It seems that it may be related to another problem we have encountered
> when trying to import a VM with qcow2 disks that have been sparsified first
> (virt-sparsify), from a kvm/libvirt provider to an iSCSI storage domain.
> In those cases, we get an error that the import failed, and in the import
> log we can see a similar "qemu-img: error while writing at byte xxx: No
> space left on device"
>
> Obviously, it is not a storage space problem as in both situations we are
> using an iSCSI LUN with ample free space.
>
> Best regards,
>
> Guillaume Pavese
> Ingénieur Système et Réseau
> Interactiv-Group
>
>
> On Thu, Jul 21, 2022 at 11:49 PM Guillaume Pavese <
> guillaume.pav...@interactiv-group.com> wrote:
>
>> On a 4.5.1 DC, I have imported a vm and its disk from an old 4.3 DC
>> (through an export domain if that's relevant)
>>
>> The DC/Cluster compat level is 4.7 and the vm was upgraded to it.
>>
>> "Original custom compatibility version 4.3 of imported VM xxx is not
>> supported. Changing it to the lowest supported version: 4.7."
>>
>>
>> The disk is raw and sparse:
>>
>> format: raw
>> sparse: true
>>
>>
>> I initially put the VM's disks on an NFS storage domain, but I want to
>> move the disks to an iSCSI one
>> However, after copying data for a while the task fails "User  has failed
>> to move disk VM-TEMPLATE-COS7_Disk1 to domain iSCSI-STO-FR-301"
>>
>> in engine.log :
>>
>> qemu-img: error while writing at byte xxx: No space left on device
>>
>>
>> 2022-07-21 08:58:23,240+02 INFO
>>  [org.ovirt.engine.core.vdsbroker.vdsbroker.GetHostJobsVDSCommand]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48)
>> [65fed1dc-e33b-471e-bc49-8b9662400e5f] FINISH, GetHostJobsVDSCommand,
>> return:
>> {0aa2d519-8130-4e2f-bc4f-892e5f7b5206=HostJobInfo:{id='0aa2d519-8130-4e2f-bc4f-892e5f7b5206',
>> type='storage', description='copy_data', status='failed', progress='79',
>> error='VDSError:{code='GeneralException', message='General Exception:
>> ("Command ['/usr/bin/qemu-img', 'convert', '-p', '-t', 'none', '-T',
>> 'none', '-f', 'raw', '-O', 'qcow2', '-o', 'compat=1.1',
>> '/rhev/data-center/mnt/svc-int-prd-sto-fr-301.hostics.fr:_volume1_ovirt-int-2_data/1ce95c4a-2ec5-47b7-bd24-e540165c6718/images/d3c33cc7-f2c3-4613-84d0-d3c9fa3d5ebd/2c4a0041-b18b-408f-9c0d-971c19a552ea',
>> '/rhev/data-center/mnt/blockSD/b5dc9c01-3749-4326-99c5-f84f683190bd/images/d3c33cc7-f2c3-4613-84d0-d3c9fa3d5ebd/2c4a0041-b18b-408f-9c0d-971c19a552ea']
>> failed with rc=1 out=b'' err=bytearray(b'qemu-img: error while writing at
>> byte 13639873536: No space left on device\\n')",)'}'}}, log id: 73f77495
>>
>> 2022-07-21 08:58:23,241+02 INFO
>>  [org.ovirt.engine.core.bll.StorageJobCallback]
>> (EE-ManagedScheduledExecutorService-engineScheduledThreadPool-Thread-48)
>> [65fed1dc-e33b-471e-bc49-8b9662400e5f] Command CopyData id:
>> '521bdf57-8379-40ce-a682-af859fb0cad7': job
>> '0aa2d519-8130-4e2f-bc4f-892e5f7b5206' execution was completed with VDSM
>> job status 'failed'
>>
>>
>>
>>
>> I do want the conversion from raw/sparse to qcow2/sparse to happen, as I
>> want to activate incremental backups.
>>
>> I think it may fail because the virtual size is bigger than the initial
>> size, as I think someone has explained on this list earlier? Can anybody
>> confirm?
>> It seems to be a pretty common use case to support, though?
>>
>> Guillaume Pavese
>> Ingénieur Système et Réseau
>> Interactiv-Group
>>
>

-- 


This message and all attachments (hereinafter "the message") are intended
exclusively for their addressees and are confidential. If you receive this
message in error, please destroy it and notify the sender immediately. Any
use of this message not in keeping with its purpose, and any distribution or
publication, in whole or in part, is prohibited unless expressly authorised.
As the internet cannot guarantee the integrity of this message,
Interactiv-group (and its subsidiaries) accept(s) no liability for this
message should it have been altered. IT, ES, UK.

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: