[ovirt-users] Re: Recombining disks under a single VM

2022-12-10 Thread matthew.st...@fujitsu.com
Scratch this query.  It turns out we had two identical images; oVirt wouldn't
permit importing the second one because of the matching disk IDs.

From: matthew.st...@fujitsu.com 
Sent: Saturday, December 10, 2022 10:24 PM
To: users@ovirt.org
Subject: [ovirt-users] Recombining disks under a single VM

Up until now, we've been using iSCSI as the storage on all of our datacenters, 
and when we needed to bulk-move VMs, we moved iSCSI storage domains.

We've just set up our first datacenter with Fibre Channel storage and no 
connectivity to our iSCSI storage.  For this move we have fallen back to using 
NFS storage domains to move the necessary VMs from datacenter ONE to 
datacenter TWO.

The bulk of the transfer went fine, but for about a dozen two-disk VMs the two 
disks did not get moved at the same time.  So now I have a dozen VMs with only 
one drive on datacenter TWO, and the other drive sits as an un-importable VM on 
the NFS storage domain mounted on datacenter TWO.

Any hints on how I "merge" these two back into one functioning VM?

==  matthew.st...@fujitsu.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EEY2GCQYBPQPKIUYZNTCQX77KFVPOLIT/


[ovirt-users] Recombining disks under a single VM

2022-12-10 Thread matthew.st...@fujitsu.com
Up until now, we've been using iSCSI as the storage on all of our datacenters, 
and when we needed to bulk-move VMs, we moved iSCSI storage domains.

We've just set up our first datacenter with Fibre Channel storage and no 
connectivity to our iSCSI storage.  For this move we have fallen back to using 
NFS storage domains to move the necessary VMs from datacenter ONE to 
datacenter TWO.

The bulk of the transfer went fine, but for about a dozen two-disk VMs the two 
disks did not get moved at the same time.  So now I have a dozen VMs with only 
one drive on datacenter TWO, and the other drive sits as an un-importable VM on 
the NFS storage domain mounted on datacenter TWO.

Any hints on how I "merge" these two back into one functioning VM?
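
In case it helps frame the question: I assume the end state is the stray disk 
registered on the NFS domain (via the storage domain's disk import in the Admin 
Portal) and then attached to the VM that already has its first disk. Below is a 
minimal sketch of that attach step with the oVirt Python SDK (ovirtsdk4); the 
engine URL, credentials, VM name and disk UUID are all placeholders.

import ovirtsdk4 as sdk
import ovirtsdk4.types as types

# Placeholder engine URL and credentials -- adjust for the real engine.
connection = sdk.Connection(
    url='https://engine.example.com/ovirt-engine/api',
    username='admin@internal',
    password='password',
    ca_file='/etc/pki/ovirt-engine/ca.pem',
)

vms_service = connection.system_service().vms_service()
vm = vms_service.list(search='name=myvm')[0]   # the VM that kept its first disk
vm_service = vms_service.vm_service(vm.id)

# Attach the re-registered second disk (placeholder UUID) to the VM.
vm_service.disk_attachments_service().add(
    types.DiskAttachment(
        disk=types.Disk(id='00000000-0000-0000-0000-000000000000'),
        interface=types.DiskInterface.VIRTIO_SCSI,
        bootable=False,
        active=True,
    ),
)

connection.close()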

==  matthew.st...@fujitsu.com
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NATOQ4VBV4D3NKMNV67FB2HRJDBGCUNF/


[ovirt-users] Re: Combining vNUMA and dedicated CPU Pinning Policy

2022-12-10 Thread Gianluca Amato
Hi Lucia, 
thanks for your suggestion. I have run several experiments with and without 
hugepages. It seems that, when oVirt calculates the amount of available memory 
on the NUMA nodes, it does not take the reserved hugepages into account. 
For example, this is the output of "numactl --hardware" on my host at the 
moment:


node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 
48 50 52 54 56 58 60 62 64 66 68 70 72 74 76 78 80 82 84 86 88 90 92 94 96 98 
100 102 104 106 108 110
node 0 size: 257333 MB
node 0 free: 106431 MB
node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 
49 51 53 55 57 59 61 63 65 67 69 71 73 75 77 79 81 83 85 87 89 91 93 95 97 99 
101 103 105 107 109 111
node 1 size: 257993 MB
node 1 free: 142009 MB
node distances:
node   0   1 
  0:  10  21 
  1:  21  10 
---

I am able to launch my VM only if I set its Memory Size to 210 GB or less 
(i.e., double the available RAM in node 0, which is the node with less 
available RAM). I tried several times with different amounts of free memory, 
and this behavior seems to be consistent. Note that, once started, the VM 
consumes the pre-allocated reserved hugepages and has almost no impact on the 
free memory.
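
One way to see how much of each node's memory is tied up in reserved hugepages 
(and therefore invisible to a plain free-memory check) is to read the per-node 
counters from sysfs. Below is a small sketch, assuming the standard 
/sys/devices/system/node layout.

import glob
import os
import re

# Sum the per-NUMA-node hugepage reservations exposed in sysfs, so they can
# be compared with the "free" figures that numactl --hardware reports.
for node_dir in sorted(glob.glob('/sys/devices/system/node/node[0-9]*')):
    reserved_kb = 0
    for hp_dir in glob.glob(os.path.join(node_dir, 'hugepages', 'hugepages-*kB')):
        page_kb = int(re.search(r'hugepages-(\d+)kB', hp_dir).group(1))
        with open(os.path.join(hp_dir, 'nr_hugepages')) as f:
            reserved_kb += page_kb * int(f.read())
    print(f'{os.path.basename(node_dir)}: {reserved_kb // 1024} MB reserved as hugepages')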

Do you think there is a reason why oVirt behaves in this way, or is this a bug 
that I should report in the GitHub repo?

Thanks a lot again,
--gianluca

___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/BG25VHK4YIHSNJFI7CV4GW7OIUSAN74V/


[ovirt-users] Re: oVirt 2.5.1.2 HCI Gluster Recovery Options

2022-12-10 Thread Clint Boggio
Good Day Strahil;

The OS did survive. The OS was on a RAID1 array and the gluster bricks are on a 
RAID5 array. Since the time I requested guidance, I have done a whole lot of 
reading and learning. I found in /etc/ansible several YAML files that contain 
the configuration of the gluster bricks and the LVM volumes inside them. I also 
found that LVM configurations can be restored from /etc/lvm/backup and 
/etc/lvm/archive. All very helpful, I must say.

I re-created the RAID array and did a light initialization to wipe it. I then 
attempted to restore the backups from /etc/lvm/backup and /etc/lvm/archive, but 
I was unable to get the thin pool to activate, hitting the "manual repair 
required" error each time. As there was not enough unused space on the system 
to cycle out the metadata LV, I was unable to proceed.
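
For reference, that restore/repair path roughly corresponds to vgcfgrestore 
followed by lvconvert --repair on the thin pool; lvconvert --repair needs free 
space in the VG for a temporary metadata LV, which matches the lack of unused 
space described above. A rough sketch, with placeholder VG and pool names:

import subprocess

VG = 'gluster_vg_sdb'            # placeholder volume group name
THIN_POOL = 'gluster_thinpool'   # placeholder thin pool name

def run(cmd):
    print('+', ' '.join(cmd))
    subprocess.run(cmd, check=True)

# Restore the VG metadata from /etc/lvm/backup (or a file under
# /etc/lvm/archive); --force is required when the VG contains thin volumes.
run(['vgcfgrestore', '--force', VG])

# Repair the thin pool metadata. This step needs enough free extents in the
# VG for a temporary repair LV, which is where my attempt ran out of space.
run(['lvconvert', '--repair', f'{VG}/{THIN_POOL}'])

# Re-activate and verify.
run(['vgchange', '-ay', VG])
run(['lvs', '-a', VG])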

I then deleted the LVM configuration and re-created the LVM volumes from the 
/etc/ansible YAML files I found. I made them identical to the other two nodes 
that are functioning in the cluster. I have them successfully mounted via fstab 
with the new UUIDs that were generated when I recreated the volumes. 

Based on the documents and articles I have read and studied these past few 
weeks, at this point I believe that I need to re-apply the appropriate 
gluster-related attributes to the correct level of the gluster mounts and 
attempt to start the gluster service. I have not done this yet because I don't 
want to cause another outage and I want to be sure that what I am doing will 
result in the afflicted gluster volumes healing and synchronising with the 
healthy nodes.
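
Based only on my reading (so please correct me if this is wrong), I assume the 
attribute in question is the trusted.glusterfs.volume-id xattr on the brick 
root, which glusterd checks before it will start a brick. The rough sequence I 
have in mind is sketched below, with the volume name, brick path and healthy 
peer as placeholders; my specific questions follow after it.

import subprocess

VOLUME = 'vmstore'                          # placeholder gluster volume name
BRICK = '/gluster_bricks/vmstore/vmstore'   # placeholder brick mount point
HEALTHY_HOST = 'node2.example.com'          # placeholder healthy peer

def run(cmd):
    print('+', ' '.join(cmd))
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# Read the volume-id xattr from the same brick path on a healthy node.
out = run(['ssh', HEALTHY_HOST, 'getfattr', '-n', 'trusted.glusterfs.volume-id',
           '-e', 'hex', BRICK])
# getfattr prints e.g. "trusted.glusterfs.volume-id=0x1234..."; keep the value.
volume_id = next(line.split('=', 1)[1] for line in out.splitlines() if '=' in line)

# Apply it to the recreated brick root on this (refurbished) node.
run(['setfattr', '-n', 'trusted.glusterfs.volume-id', '-v', volume_id, BRICK])

# Bring glusterd back up and ask for a full heal of the volume.
run(['systemctl', 'start', 'glusterd'])
run(['gluster', 'volume', 'heal', VOLUME, 'full'])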

1. What attributes, and at what level, should I apply them to the refurbished 
node?
2. What problems could I encounter when I try to start the gluster service 
after making the changes?
3. Should I remove the new node from the gluster cluster and then re-add it, or 
will it heal in its current refurbished state?

I appreciate you taking the time to check on the status of this situation, and 
thank you for any help or insight you can provide.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UEIP7GKIG3IMT2UMPTLYV475XMVRULOE/