Hello Marek, hello everyone,
I'm sorry I didn't update you earlier. Unfortunately, a key team member
left our team, which pushed back our release by some time. We are still
pursuing the matter according to the original plan and will release the
TF provider, but we will need some more time.
What kind of storage domain do you use?
Best Regards,
Strahil Nikolov
Apologies for the delay.
Yes sir, all folders and the uid/gid of the gluster vol are 36.
Thanks
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct:
Yes sir, all hosts, volumes, and bricks have this setting.
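To confirm that ownership really is uid/gid 36 (vdsm:kvm) everywhere, a quick check like the following can help; this is a minimal sketch, and the brick path is an assumption — adjust it to your volume's actual mount point:

```shell
# Sketch: report any path under a Gluster brick NOT owned by vdsm:kvm
# (uid/gid 36), which would break oVirt's access to the storage.
check_vdsm_ownership() {
    # prints every path under $1 whose uid or gid is not 36
    find "$1" \( ! -uid 36 -o ! -gid 36 \) -print
}

BRICK=/gluster_bricks/data/brick   # hypothetical brick path -- adjust
if [ -d "$BRICK" ]; then
    check_vdsm_ownership "$BRICK"
fi
```

Empty output means every file and directory under the brick has the expected ownership; any printed path is a candidate for a `chown 36:36` fix.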
Hi,
One of our VMs is paused due to lack of storage space while the VM is in a
snapshot deletion task. Now we can't restart or shut down the VM, and we can't
delete the snapshot.
How can I fix this problem? Any help is much appreciated!
Thank you in advance!
Regards,
Victor
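For context on that symptom: qemu pauses a VM when a write fails with ENOSPC, and the VM can usually be resumed once space is freed on the storage domain. A first diagnostic step is checking how full the domain's filesystem is; this is a rough sketch, and the mount path is an assumption — adjust to where your domain is mounted:

```shell
# Sketch: percentage of free space on the filesystem holding a path.
# qemu pauses a VM on an ENOSPC I/O error; freeing space on the storage
# domain is normally a precondition for resuming it.
free_pct() {
    # df -P guarantees a single parseable line per filesystem;
    # column 5 is "Capacity" (used %), so free = 100 - used
    df -P "$1" | awk 'NR==2 { gsub(/%/, "", $5); print 100 - $5 }'
}

DOMAIN=/rhev/data-center/mnt   # hypothetical storage domain mount root
if [ -d "$DOMAIN" ]; then
    free_pct "$DOMAIN"
fi
```

A stuck snapshot-deletion task is a separate problem from the pause itself; once space is freed, resuming the VM from the Administration Portal is typically possible, while the snapshot lock may need engine-side attention.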
Hi,
We are currently building a three-node hyper-converged cluster based on oVirt
Node and Gluster. While discussing the different storage layouts, we couldn't
reach a final decision.
Currently our servers are equipped as follows:
- servers 1 & 2:
- Two 800GB disks for OS
- 100GB RAID1 use
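One arithmetic point that often settles such layout discussions: in a three-node replica-3 Gluster volume, every byte is written to all three bricks, so usable capacity equals the smallest single brick, not the sum. A tiny sketch (the sizes are purely illustrative, not taken from the setup above):

```shell
# Sketch: usable capacity of a replica-3 Gluster volume on 3 nodes.
# Each byte lands on all three bricks, so usable space is bounded by
# the smallest brick. Sizes below are illustrative examples only.
replica3_usable_gb() {
    # usage: replica3_usable_gb <brick1_gb> <brick2_gb> <brick3_gb>
    printf '%s\n' "$1" "$2" "$3" | sort -n | head -n1
}

replica3_usable_gb 800 800 800
```

A replica 3 arbiter 1 layout changes this trade-off: two bricks hold full data copies and the third holds only metadata, so usable capacity is bounded by the smaller of the two data bricks.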
Hi Vojta
My LVM version on the hypervisor is:
udisks2-lvm2-2.9.0-7.el8.x86_64
llvm-compat-libs-12.0.1-4.module_el8.6.0+1041+0c503ac4.x86_64
lvm2-libs-2.03.14-2.el8.x86_64
lvm2-2.03.14-2.el8.x86_64
libblockdev-lvm-2.24-8.el8.x86_64
LVM on VM:
[root@nextcloud ~]# rpm -qa | grep lvm
lvm2-libs-2.03.
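Since the thread hinges on comparing lvm2 versions between the hypervisor (2.03.14, EL8) and the guests, a small helper for ordering version strings can make the comparison explicit; this is a sketch relying on GNU `sort -V` version ordering:

```shell
# Sketch: compare two package version strings (e.g. lvm2 on host vs
# guest) using GNU sort's version-aware ordering.
ver_lt() {
    # true if $1 sorts strictly before $2 as a version string
    [ "$1" != "$2" ] && \
        [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

# e.g. an EL7 lvm2 compared against the EL8 hypervisor's lvm2 above
ver_lt 2.02.187 2.03.14 && echo "guest lvm2 is older than host lvm2"
```

Note this ignores the package release suffix; for a strict RPM comparison, `rpmdev-vercmp` is the authoritative tool.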
Hi Vojta,
My storage domains are GlusterFS. I will boot into rescue to see the LVM
version on the VMs themselves.
Yet the two VMs I tested with have different LVM versions (EL7 vs EL8).
When I "ls" from the grub rescue prompt I see only:
(hd0) (hd0,msdos2) (hd0,msdos1) (md/boot)
As only hd0 is visi
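For anyone hitting the same rescue prompt: when `ls` still shows `(md/boot)`, one commonly attempted recovery is pointing grub's prefix at the md device and loading the normal module by hand. This is a sketch only — it assumes /boot lives on that md array and that the grub files sit under a `grub2` directory there, which may not match this setup:

```
grub rescue> set prefix=(md/boot)/grub2
grub rescue> insmod normal
grub rescue> normal
```

If `insmod normal` fails, the prefix path is wrong or the modules on that array don't match the installed grub core image, and grub needs to be reinstalled from a rescue boot instead.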
Hi,
> Hi All,
> I recently migrated from 4.3.10 to 4.4.9 and it seems that booting from
> software raid0 (I have multiple gluster volumes) is not possible with
> Cluster compatibility 4.6. I've tested creating a fresh VM and it also
> suffers the problem. Changing various options (virtio-scsi to
On 07/01/2022 07:57, Ritesh Chikatwar wrote:
Try downgrading on all hosts and give it a try.
I completed the downgrade and the system seems recovered. Thanks Ritesh,
you saved my weekend!
What about future upgrades? Any clue about what is going wrong with the
recent qemu packages?
Thanks again,
Andrea