Hi,
finally this post helped:
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CL4MI3IJH6MPDXS3B23FQ3BDJXHHSKAG/
The invisible locked entry was caused by a missing time_zone in the HostedEngine configuration...
/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c "update vm_static
set time_zone='Etc/GM
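The command above is cut off in the archive; a hedged sketch of the full fix, assuming the value is 'Etc/GMT' and the row is matched by vm_name (both assumptions -- verify against the linked post before running):

```shell
# Sketch only -- the timezone string 'Etc/GMT' and the vm_name filter are
# assumptions taken from context, not from this truncated message.
/usr/share/ovirt-engine/dbscripts/engine-psql.sh -c \
  "update vm_static set time_zone='Etc/GMT' where vm_name='HostedEngine';"
```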
Go to the host running the HostedEngine VM and dump the XML via virsh. Then
power cycle the engine and check if it fixed the issue with the CPU.
Best Regards,
Strahil Nikolov
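The dump step above can be done read-only; a minimal sketch ('HostedEngine' is the usual domain name on oVirt hosts, but confirm it with the list command first):

```shell
# On the host running the HostedEngine VM: list domains read-only,
# then dump the domain XML and inspect the <clock> (timezone) element.
virsh -r list --all
virsh -r dumpxml HostedEngine | grep -A 3 '<clock'
```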
On Wed, Aug 3, 2022 at 23:58, Jiří Sléžka wrote:
On 8/3/22 at 03:06, Strahil Nikolov wrote:
I think it's related to Compute -> Clusters -> Cluster Name -> Gluster Hooks
I think https://access.redhat.com/solutions/6644151 should solve the
problem (you can use a developer subscription to access it).
Thanks, I really had 5 hook conflicts.
I think it's related to Compute -> Clusters -> Cluster Name -> Gluster Hooks
I think https://access.redhat.com/solutions/6644151 should solve the problem
(you can use a developer subscription to access it).
Best Regards,
Strahil Nikolov
On Wed, Aug 3, 2022 at 1:51, Jiří Sléžka wrote:
but some webhook is registered on the host... ovirt-hci.mch.local is
resolvable (through /etc/hosts)
[root@ovirt-hci01 ~]# gluster-eventsapi status
Webhooks:
http://ovirt-hci.mch.local:80/ovirt-engine/services/glusterevents
+---+-+---+
|NODE | NODE STAT
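The status table above is truncated in the archive; the usual way to inspect and repair the engine webhook registration is sketched below (the webhook URL is the one from this thread -- adjust it to your engine FQDN):

```shell
# Show registered webhooks and per-node sync state.
gluster-eventsapi status

# If the engine webhook is missing on some node, re-add it and
# push the configuration out to all peers.
gluster-eventsapi webhook-add http://ovirt-hci.mch.local:80/ovirt-engine/services/glusterevents
gluster-eventsapi sync
```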
On 7/23/22 at 23:53, Strahil Nikolov wrote:
Did you identify any errors in the Engine log that could provide any clue ?
Unfortunately, no.
But funny thing... today I looked into the HTML source of the cluster
settings page (via Firefox's web developer console). The Gluster checkbox
has this HTML code
By the way, have you tried to set each host into maintenance and then
'Reinstall' it from the Admin Portal?
Best Regards,
Strahil Nikolov
On Sun, Jul 24, 2022 at 0:53, Strahil Nikolov wrote:
Did you identify any errors in the Engine log that could provide any clue?
Best Regards,
Strahil Nikolov
On Wed, Jul 20, 2022 at 16:15, Jiří Sléžka wrote:
On 7/19/22 22:40, Strahil Nikolov wrote:
Then, just ensure that the glusterd.service is enabled on all hosts and
leave it as it is.
If it worries you, you will have to move one of the hosts in another
cluster (probably a new one) and slowly migrate the VMs from the old to
the new one.
Yet, if you use only 3 hosts that can put your VMs i
On 7/16/22 07:53, Strahil Nikolov wrote:
Try first with a single host. Set it into maintenance and check if the
checkmark is available.
setting a single host to maintenance didn't change the state of the
gluster services checkbox in the cluster settings.
Try first with a single host. Set it into maintenance and check if the
checkmark is available. If not, try to 'reinstall' (UI, Hosts, Installation,
Reinstall) the host. During the setup, it should give you the option to update
whether the host can run the HE and it should allow you to select the checkmark for Gl
On 7/14/22 at 21:21, Strahil Nikolov wrote:
Go to the UI, select the volume, press 'Start' and mark the checkbox to
'Force'-fully start it.
well, it worked :-) Now all bricks are in the UP state. In fact, from the
command-line point of view all volumes were active and all bricks up all
the time.
A
Go to the UI, select the volume, press 'Start' and mark the checkbox to
'Force'-fully start it.
At least it should update the engine that everything is running. Have you
checked if the checkmark for the Gluster service is available if you set the
host into maintenance?
Best Regards,
Strahil Nikolov
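The UI action described above maps to a plain gluster CLI call; a sketch, with 'myvol' as a placeholder volume name:

```shell
# Force-start a volume (equivalent of the UI 'Start' with the 'Force'
# checkbox); 'myvol' is a placeholder -- use the real volume name.
gluster volume start myvol force

# Afterwards, confirm every brick reports as Online.
gluster volume status myvol
```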
On 7/14/22 14:30, Jiří Sléžka wrote:
On 7/14/22 00:34, Strahil Nikolov wrote:
Well... not yet.
Check if the engine detects the volumes and verify again that all
glustereventsd work.
I would even consider restarting the engine, just to be on the safe side.
engine restarted (I also yum updated it before), glustereventsd is
runni
Well... not yet.
Check if the engine detects the volumes and verify again that all
glustereventsd services work.
I would even consider restarting the engine, just to be on the safe side.
What is your oVirt version? Maybe an update could solve your problem.
Best Regards,
Strahil Nikolov
On Wed, Jul 13
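Restarting the engine and checking its version, as suggested above, is done on the engine VM itself; a sketch using standard oVirt tooling:

```shell
# On the engine VM: restart the engine service.
systemctl restart ovirt-engine

# Check the installed engine version and whether an upgrade is available.
rpm -q ovirt-engine
engine-upgrade-check
```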
On 7/13/22 14:53, Jiří Sléžka wrote:
On 7/12/22 22:28, Strahil Nikolov wrote:
glustereventsd will notify the engine when something changes - like a
new volume is created from the cli (or bad things happened ;) ), so it
should be running.
You can use the workaround from the github issue and restart the
glustereventsd service.
glustereventsd will notify the engine when something changes - like a new
volume is created from the cli (or bad things happened ;) ), so it should be
running.
You can use the workaround from the github issue and restart the
glustereventsd service.
For the vdsm, you can always run '/usr/libexec
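Restarting the events daemon, as suggested above, is a plain systemd operation on each gluster node; a minimal sketch:

```shell
# On each gluster node: restart glustereventsd, make sure it stays
# enabled across reboots, then verify it is active.
systemctl restart glustereventsd
systemctl enable glustereventsd
systemctl is-active glustereventsd
```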
On 7/11/22 16:22, Jiří Sléžka wrote:
On 7/11/22 15:57, Strahil Nikolov wrote:
Can you check for AVC denials and the error message like the one described
in https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183 ?
thanks for the reply, there are two unrelated (qemu-kvm) avc denials logged
(related probably to sanlock reco
Can you check for AVC denials and the error message like the one described in
https://github.com/gluster/glusterfs-selinux/issues/27#issue-1097225183 ?
Best Regards,
Strahil Nikolov
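Checking for AVC denials, as asked above, can be done with the standard audit tooling; a sketch (assumes auditd is running, and must be run as root):

```shell
# List SELinux AVC denials recorded recently by auditd.
ausearch -m avc -ts recent

# Or grep the raw audit log for denial records.
grep -i 'avc.*denied' /var/log/audit/audit.log | tail -n 20
```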
On Mon, Jul 11, 2022 at 16:44, Jiří Sléžka wrote: Hello,
On 7/11/22 14:34, Strahil Nikolov wrote:
> Can you chec