On Wed, Mar 16, 2022 at 5:41 PM Pascal D wrote:
>
> One issue I have with hosted-engine is that when something goes wrong it has
> a domino effect, because hosted-engine cannot communicate with its database.
> I have been thinking of hosting the oVirt engine separately on a different
> hypervisor and
Hi, 4.5.0 Alpha was released yesterday!
As for the oVirt 4.4 test day, we have a Trello board at
https://trello.com/b/3FZ7gdhM/ovirt-450-test-day .
If you have trouble accessing the Trello board, please let me know.
A release management draft page has been created at:
https://www.ovirt.org/release/4.5.
Thanks, Strahil.
The environment is as follows:
oVirt Open Virtualization Manager:
Software Version: 4.4.9.5-1.el8
oVirt Node:
OS Version: RHEL - 8.4.2105.0 - 3.el8
OS Description: oVirt Node 4.4.6
GlusterFS Version: glusterfs-8.5-1.el8
The volumes are arbiter (2+1) volumes, so split-brain should
A stale file handle is an indication of a split-brain situation. On a 3-way
replica, this could only mean a gfid mismatch (the gfid is a unique ID for each
file in Gluster).
I think those .prob files can be deleted safely, but I am not fully convinced.
What version of oVirt are you using? What about Gluster?
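To make the gfid-mismatch check above concrete, here is a small sketch. On a real cluster you would read the gfid of the file's copy on each brick with getfattr (that command is shown in a comment); the two hex values below are made-up placeholders standing in for that per-brick output, so only the comparison logic runs here. The brick path in the comment is hypothetical.

```shell
#!/bin/sh
# On each brick host, read the file's gfid xattr (run per brick; path is an
# example):
#   getfattr -n trusted.gfid -e hex /gluster/brick1/vmstore/images/disk1
# The two values below are made-up placeholders standing in for that output.
gfid_brick1="0x8e1c3a72d4b941fa9c0e5b6d7f281a30"
gfid_brick2="0x5fa0c2e19b7d4c3a8e6f1d2b3c4a5e60"

# A file is gfid-split-brained when its bricks disagree on the gfid.
if [ "$gfid_brick1" != "$gfid_brick2" ]; then
    echo "gfid mismatch: this file is in split-brain"
else
    echo "gfids match: no gfid split-brain on this file"
fi
```

With an arbiter (2+1) volume the arbiter's vote normally prevents this, which is why a stale file handle there is worth investigating rather than ignoring.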
I would go this way:
1. Backup.
2. Reinstall the host (as you migrate from 4.3 to 4.4, we need EL8).
3. Use the command provided in the previous e-mail (hosted-engine ...) to
deploy to the iSCSI. Ensure the old HE VM is offline and all hosts see the
iSCSI before the restore.
4. If the deployment is OK and th
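The backup-and-restore pair behind steps 1 and 3 can be sketched as below. engine-backup is run on the old engine VM and the restore-from-file deploy on the reinstalled host; these are the standard oVirt commands, but the backup file name is only an example, and the sketch prints the commands rather than executing them so the flow can be reviewed first.

```shell
#!/bin/sh
# Dry-run sketch: the standard oVirt backup + restore-deploy pair.
# 'engine-backup' runs on the old engine VM; the restore deploy runs on the
# freshly reinstalled EL8 host. File name is an example. Printed, not run.
cmds='engine-backup --mode=backup --file=engine-backup.tar.gz --log=backup.log
hosted-engine --deploy --restore-from-file=engine-backup.tar.gz'
printf '%s\n' "$cmds"
```

During the restore deploy you are prompted for the storage, which is where the iSCSI target is selected instead of the old Gluster domain.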
Check the perl script from https://forums.centos.org/viewtopic.php?t=73634
According to http://elrepo.org/tiki/DeviceIDs you should run "lspci -n | grep
'03:00.0'" and then search for the vendor:device ID pair.
http://elrepoproject.blogspot.com/2019/08/rhel-80-and-support-for-removed-adapters.ht
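The ID lookup described above can be sketched like this. The 'lspci -n' line is a sample: the MegaRAID SAS 1078 typically reports vendor:device 1000:0060, but verify on your own machine. The grep pulls out the vendor:device pair, which is what you search for on the elrepo DeviceIDs page.

```shell
#!/bin/sh
# Sample 'lspci -n' output line for slot 03:00.0 (the SAS 1078 usually shows
# vendor:device 1000:0060; confirm on your own system).
line="03:00.0 0104: 1000:0060 (rev 04)"

# Extract the vendor:device ID pair: four hex digits, a colon, four hex digits.
ids=$(printf '%s\n' "$line" | grep -oE '[0-9a-f]{4}:[0-9a-f]{4}')
echo "$ids"
```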
Two days ago I found that 2 of the 3 oVirt nodes had been set to
'Non-Operational'. GlusterFS seemed to be OK from the command line, but the
oVirt engine WebUI was reporting 2 out of 3 bricks per volume as down, and the
event logs were filling up with the following types of messages.
One issue I have with hosted-engine is that when something goes wrong it has a
domino effect, because hosted-engine cannot communicate with its database. I
have been thinking of hosting the oVirt engine separately on a different
hypervisor and having all my hosts undeployed. However, for efficiency my netw
On Wed, Mar 16, 2022 at 4:45 PM Demeter Tibor wrote:
>
> Dear Didi,
> Thank you for your reply.
>
> My glusterfs uses the hosts' internal disks. I have 4 hosts, but the
> glusterfs uses only 3. It is a CentOS 7 based system.
>
> As I think, I have to eliminate the glusterfs first, because I can't u
Dear Didi,
Thank you for your reply.
My glusterfs uses the hosts' internal disks. I have 4 hosts, but the glusterfs
uses only 3. It is a CentOS 7 based system.
As I think, I have to eliminate the glusterfs first, because I can't upgrade
the hosts while the engine is running on them. That's
On Wed, Mar 16, 2022 at 1:39 PM Demeter Tibor wrote:
>
> Dear Users,
>
> I have to upgrade our hyperconverged oVirt system from 4.3 to 4.4, but
> meanwhile I would like to change the storage backend under the engine. At
> this moment it is a Gluster-based clustered FS, but I don't really like
>
Dear Users,
I have to upgrade our hyperconverged oVirt system from 4.3 to 4.4, but
meanwhile I would like to change the storage backend under the engine. At this
moment it is a Gluster-based clustered FS, but I don't really like it. I
would like to change to a hardware-based iSCSI storage.
I am checking the MegaRAID SAS drivers from elrepo one by one, but I keep
getting "modprobe: ERROR: Could not insert 'megaraid-sas': Invalid argument".
I am confused about which driver will support my controller:
03:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 1078 (rev 04)
Subsystem: Dell PERC
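One way to check whether a given elrepo kmod actually claims this controller is to build the PCI modalias from the vendor:device pair (1000:0060 here, as reported by lspci -n for the 1078; verify on your own machine) and look for it in the module's alias list via 'modinfo -F alias megaraid_sas'. The sketch below only constructs the alias string; the modinfo comparison has to be done on the target host.

```shell
#!/bin/sh
# Vendor/device from 'lspci -n -s 03:00.0' (1000:0060 for the MegaRAID SAS
# 1078; verify on your own machine).
vendor=1000
device=0060

# Build the kernel PCI modalias pattern for this device; on the target host,
# check whether a matching line appears in:  modinfo -F alias megaraid_sas
modalias=$(printf 'pci:v0000%sd0000%ssv*sd*bc*sc*i*' "$vendor" "$device")
echo "$modalias"
```

If no installed kmod lists a matching alias, modprobe cannot bind the driver to the hardware, which would explain trying the elrepo packages one by one.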
Hereafter, the output of running the lspci command on CentOS 7:
[root@localhost riyaz]# lspci | grep RAID
03:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 1078 (rev 04)
[root@localhost riyaz]# lspci -k -s 03:00.0
03:00.0 RAID bus controller: Broadcom / LSI MegaRAID SAS 1078 (rev 04)
Subsystem: Dell P