On Tue, Sep 22, 2020 at 6:46 PM Philip Brown wrote:
>
> Chrome didn't want to talk AT ALL to oVirt with self-signed certs (because
> HSTS is enabled)
>
> So I installed signed wildcard certs to the engine, and the nodes, following
>
> http://187.1.81.65/ovirt-engine/docs/manual/en-US/html/Adminis
Vincent,
This document will be useful
https://www.ovirt.org/documentation/upgrade_guide/#Upgrading_the_Manager_to_4-4_4-3_SHE
On Wed, Sep 23, 2020, 3:55 AM Vincent Royer wrote:
> I have 3 nodes running node ng 4.3.9 with a gluster/hci cluster. How do I
> upgrade to 4.4? Is there a guide?
> _
The email client with this forum is a bit ... I was told that through this web
interface I could post images, since embedded ones in email get scraped out, but
I'm not seeing how that is done. It seems to be text only.
1) ..."I would give the engine a 'Windows'-style fix (a.k.a. reboot)" how
does one restart
I have 3 nodes running node ng 4.3.9 with a gluster/hci cluster. How do I
upgrade to 4.4? Is there a guide?
On Tue, Sep 22, 2020 at 11:23 PM Strahil Nikolov wrote:
>
> In my setup, I got no filter at all (yet, I'm on 4.3.10):
> [root@ovirt ~]# lvmconfig | grep -i filter
We create the LVM filter automatically since 4.4.1. If you don't use block storage
(FC, iSCSI) you don't need an LVM filter. If you do, you
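(For reference: on 4.4.1+ hosts the filter is normally generated by vdsm-tool rather
than edited by hand; a minimal sketch, run on the host:)
# Suggest and apply the LVM filter that vdsm computes for this host
vdsm-tool config-lvm-filter -y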
In my setup, I got no filter at all (yet, I'm on 4.3.10):
[root@ovirt ~]# lvmconfig | grep -i filter
[root@ovirt ~]#
P.S.: Don't forget to run 'dracut -f', since the initramfs keeps a local copy of
lvm.conf.
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020, 23:05:2
On Tue, Sep 22, 2020 at 11:05 PM Jeremey Wise wrote:
>
>
>
> Correct.. on wwid
>
>
> I do want to make clear here that to get around the error you must ADD
> (not remove) drives to /etc/lvm/lvm.conf so the oVirt Gluster setup can complete
> configuring the drives.
>
> [root@thor log]# cat /etc/lvm/lvm.con
On Tue, Sep 22, 2020 at 10:57 PM Strahil Nikolov wrote:
>
> Obtaining the wwid is not exactly correct.
It is correct - for NVMe devices, see:
https://github.com/oVirt/vdsm/blob/353e7b1e322aa02d4767b6617ed094be0643b094/lib/vdsm/storage/lvmfilter.py#L300
This matches the way that multipath lookup
Correct.. on wwid
I do want to make clear here that to get around the error you must ADD
(not remove) drives to /etc/lvm/lvm.conf so the oVirt Gluster setup can complete
configuring the drives.
[root@thor log]# cat /etc/lvm/lvm.conf |grep filter
# Broken for gluster in oVirt
#filter =
["a|^/dev/disk/by-id/
Hmm.
That seems to be half the battle.
I updated the files in /etc/pki/vdsm/libvirt-spice, and the debug output from
remote-viewer changes.. but it's not entirely happy.
(remote-viewer.exe:15808): Spice-WARNING **: 12:55:01.188:
../subprojects/spice-common/common/ssl_verify.c:444:openssl_verify
Most probably there is an option to tell it (I mean oVirt) the exact keys to be
used.
Yet, give the engine a gentle push and reboot it - just to be sure you are not
chasing a ghost.
I'm using self-signed certs and I can't help much in this case.
Best Regards,
Strahil Nikolov
On Tuesday,
Obtaining the wwid is not exactly correct.
You can identify them via:
multipath -v4 | grep 'got wwid of'
Short example:
[root@ovirt conf.d]# multipath -v4 | grep 'got wwid of'
Sep 22 22:55:58 | nvme0n1: got wwid of
'nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001'
Sep
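(If the goal is to keep that local NVMe device out of multipath, a common approach
is to blacklist it by the wwid printed above in a conf.d drop-in; a hedged sketch,
with a hypothetical file name:)
# /etc/multipath/conf.d/local-nvme.conf (hypothetical name)
blacklist {
    wwid "nvme.1cc1-324a31313230303131343036-414441544120535838323030504e50-0001"
}
# reload the multipath maps afterwards
multipath -r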
Thanks for the initial start, Strahil,
my desktop is Windows, but I took apart the console.vv file, and these are my
findings:
in the console.vv file, there is a valid CA cert, which is for the signing CA
for our valid wildcard SSL cert.
However, when I connected to the target host, on the tls
I assume you are working on Linux (for Windows you will need to ssh to a Linux
box or even one of the Hosts).
When you download the 'console.vv' file for a Spice connection, you will have to
note several things:
- host
- tls-port (not the plain 'port=' !!!)
- ca
Process the CA and replace the '\
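(A minimal sketch of that step, assuming GNU sed and a console.vv in the current
directory; the output file name is arbitrary:)
# Pull the ca= field out of console.vv and turn the literal '\n' sequences
# into real newlines, producing a PEM file usable for verification
grep '^ca=' console.vv | sed -e 's/^ca=//' -e 's/\\n/\n/g' > spice_ca.pem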
On Tue, Sep 22, 2020 at 1:50 AM Jeremey Wise wrote:
>
>
> Agree about an NVMe Card being put under mpath control.
NVMe can be used via multipath, this is a new feature added in RHEL 8.1:
https://bugzilla.redhat.com/1498546
Of course, when the NVMe device is local there is no point in using it
via m
oVirt uses the "/rhev/mnt..." mountpoints.
Do you have those (for each storage domain)?
Here is an example from one of my nodes:
[root@ovirt1 ~]# df -hT | grep rhev
gluster1:/engine fuse.glusterfs 100G 19G 82G
19% /rhev/data-center/mnt/glusterSD/gluster1:_engi
More detail on the problem.
After starting remote-viewer --debug, I get:
(remote-viewer.exe:18308): virt-viewer-DEBUG: 11:45:30.594: New spice channel
0608B240 SpiceMainChannel 0
(remote-viewer.exe:18308): virt-viewer-DEBUG: 11:45:30.594: notebook show
status 03479130
(remote-
On Tue, Sep 22, 2020 at 4:18 AM Jeremey Wise wrote:
>
>
> Well.. to know how to do it with Curl is helpful.. but I think I did
>
> [root@odin ~]# curl -s -k --user admin@internal:blahblah
> https://ovirte01.penguinpages.local/ovirt-engine/api/storagedomains/ |grep
> ''
> data
>
Chrome didn't want to talk AT ALL to oVirt with self-signed certs (because HSTS
is enabled)
So I installed signed wildcard certs to the engine, and the nodes, following
http://187.1.81.65/ovirt-engine/docs/manual/en-US/html/Administration_Guide/appe-Red_Hat_Enterprise_Virtualization_and_SSL.html
oVirt 4.4 requires EL8.2, so no, you cannot go to 4.4 without upgrading the OS
to EL8.
Yet, you can still bump the version to 4.3.10, which is still EL7 based and
works quite well.
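(The minor bump itself follows the usual engine upgrade flow; a hedged sketch,
assuming the ovirt-4.3 repositories are already enabled on the engine machine:)
# update the setup packages, re-run setup, then update the remaining packages
yum update ovirt\*setup\*
engine-setup
yum update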
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020, 17:39:52 GMT+3,
wrote:
Hi ever
By the way, did you add the third host in oVirt?
If not, maybe that is the real problem :)
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020, 17:23:28 GMT+3, Jeremey Wise
wrote:
It's like oVirt thinks there are only two nodes in gluster replication
# Yet
That's really weird.
I would give the engine a 'Windows'-style fix (a.k.a. reboot).
I guess some of the engine's internal processes crashed/looped and it doesn't
see the reality.
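(For reference, restarting a hosted engine usually looks something like the sketch
below - these are the standard commands, but treat it as an outline, not the exact
procedure used here:)
# keep the HA agents from interfering, restart/reboot the engine VM, then resume
hosted-engine --set-maintenance --mode=global
ssh root@<engine-fqdn> 'systemctl restart ovirt-engine'   # or simply: reboot
hosted-engine --set-maintenance --mode=none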
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020, 16:27:25 GMT+3, Jeremey Wise
wrote:
Arik / Strahil,
Many thanks!
Just in case anyone else is hitting the same issue (*NOTE* Host and VM
ID _will_ be different!)
0. Ran a backup:
1. Connect to the hosted-engine and DB:
$ ssh root@vmengine
$ su - postgres
$ psql engine
2. Execute a select query to verify that the VM's run_on_vds is N
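(Step 2 presumably looked something like the query below; vm_dynamic/run_on_vds are
from the standard engine schema, and the VM ID is a placeholder:)
engine=# SELECT vm_guid, run_on_vds FROM vm_dynamic WHERE vm_guid = '<VM-ID>';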
Hi everyone,
I am writing for support regarding the oVirt upgrade.
I am using oVirt version 4.2 on the CentOS 7.4 operating system.
The latest release of the oVirt engine is 4.4, which is available for CentOS 8.
Can I upgrade without upgrading the operating system to CentOS 8?
I would not be wrong
>Ok, May I know why you think it's only a bug in SLES?
I never claimed it is a bug in SLES, but a bug in oVirt detecting proper memory
usage in SLES.
The behaviour you observe was normal for RHEL6/CentOS6/SLES11/openSUSE and
below, so it is normal for some OSes. In my oVirt 4.3.10, I see that
Also, in some rare cases I have seen oVirt showing gluster as 2 out of 3 bricks
up, but usually it was a UI issue: you go to the UI and mark a "force start",
which will try to start any bricks that were down (won't affect gluster) and
will wake up the UI task to verify brick status again.
http
Usually I first start with:
'gluster volume heal info summary'
Anything that is not 'Connected' is bad.
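(The heal summary is per volume; for example, with the 'data' volume mentioned
elsewhere in this thread:)
gluster volume heal data info summary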
Yeah, the abstraction is not so nice, but the good thing is that you can always
extract the data from the single node left (it will require playing a little bit
with the quorum of the volume).
When I posted last in the thread I pasted a rolling restart. And... now
it is replicating.
oVirt is still showing it wrong. BUT.. I did my normal test from each of the
three nodes.
1) Mount the Gluster file system with localhost as primary and the other two as
tertiary to a local mount (like a client woul
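(A sketch of such a client-style mount, with a hypothetical mount point; 'data' is
the volume from this thread and the backup-server pairing below is illustrative:)
mkdir -p /mnt/data-test
mount -t glusterfs -o backup-volfile-servers=odin:thor localhost:/data /mnt/data-test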
At around Sep 21 20:33 local time, you got a loss of quorum - that's not good.
Could it be a network 'hiccup'?
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020, 15:05:16 GMT+3, Jeremey Wise
wrote:
I did.
Here are all three nodes with restart. I find it odd ...
A replication issue could mean that one of the clients (FUSE mounts) is not
attached to all bricks.
You can check the number of clients via:
gluster volume status all client-list
As a precaution, just do a rolling restart:
- set a host in maintenance and mark it to stop the glusterd service (I'm reff
I will have a look.
Thank you for your support in oVirt!
On Tue, 22 Sep 2020 at 15:30, Strahil Nikolov wrote:
> Hi Eyal,
>
> thanks for the reply - all the proposed options make sense.
> I have opened an RFE -> https://bugzilla.redhat.com/show_bug.cgi?id=1881457,
> but can you verify that the pr
Any option to extend the Gluster volume?
Other approaches are quite destructive. I guess you can obtain the VM's XML
via virsh and then copy the disks to another pure-KVM host.
Then you can start the VM while you are recovering from the situation.
virsh -c qemu:///system?authfile=/etc/ovirt
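(For illustration, dumping a VM's XML generally looks like the line below; the VM
name is hypothetical, and since the authfile URI above is cut off it is left out:)
virsh -c qemu:///system dumpxml myvm > myvm.xml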
Hi Eyal,
thanks for the reply - all the proposed options make sense.
I have opened an RFE -> https://bugzilla.redhat.com/show_bug.cgi?id=1881457,
but can you verify that the product/team is the correct one?
Best Regards,
Strahil Nikolov
On Tuesday, 22 September 2020, 12:55:56 GMT+3
I did.
Here are all three nodes with restart. I find it odd ... there has been a
set of messages at the end (see below), and I don't know enough about what
oVirt laid out to know if it is bad.
###
[root@thor vmstore]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-sys
Hello Strahil,
I just set cluster.min-free-disk to 1%:
# gluster volume info data
Volume Name: data
Type: Distribute
Volume ID: 2d3ea533-aca3-41c4-8cb6-239fe4f82bc3
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node2.domain.com:/home/brick1
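(Presumably set with the standard volume-set command; a sketch against the 'data'
volume shown above:)
gluster volume set data cluster.min-free-disk 1%
gluster volume get data cluster.min-free-disk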
On Mon, 21 Sep 2020 at 23:19, Strahil Nikolov wrote:
> Hey Eyal,
>
> it's really irritating that only ISOs can be imported as disks.
>
> I had to:
> 1. Delete snapshot (but I really wanted to keep it)
> 2. Detach all disks from existing VM
> 3. Delete the VM
> 4. Import the VM from the data domai
Ok, may I know why you think it's only a bug in SLES?
As I said before, oVirt is behaving the same way even for CentOS 7 VMs. I am
attaching the details again here below.
The memory details of one running CentOS VM are as below.
[centos@centos-vm1 ~]$ free -m
total used
Ok, solved.
It was simply that node2 could not mount node1's data domain via NFS. I added
node1 to node2's firewall and to /etc/exports, tested, and everything went fine.
Regards,
Francesco
On 21/09/2020 17:44, francesco--- via Users wrote:
Hi Everyone,
In a test environment I
Hi again Strahil,
It’s oVirt 4.3.10. Same CPU on the entire cluster, it’s three machines with
Xeon E5-2620v2 (Ivy Bridge), all the machines are identical in model and specs.
I’ve changed the VM CPU Model to:
Nehalem,+spec-ctrl,+ssbd
Let’s see how it behaves. If it crashes again I’ll definitely
Hi Gianluca.
On 22 Sep 2020, at 04:24, Gianluca Cecchi <gianluca.cec...@gmail.com> wrote:
On Tue, Sep 22, 2020 at 9:12 AM Vinícius Ferrão via Users <users@ovirt.org>
wrote:
Hi Strahil, yes I can’t find anything recently either. You dug way further
than me, I found some regre
This looks much like my OpenBSD 6.6 under the latest AMD CPUs. KVM did not accept
a pretty valid instruction and it was a bug in KVM.
Maybe you can try to:
- power off the VM
- pick an older CPU type for that VM only
- power on and monitor it in the next days
Do you have a cluster with different cpu
On Tue, Sep 22, 2020 at 9:12 AM Vinícius Ferrão via Users
wrote:
> Hi Strahil, yes I can’t find anything recently either. You dug way
> further than me, I found some regressions on the kernel but I don’t know if
> they're related or not:
>
> https://patchwork.kernel.org/patch/5526561/
> https://b
Hi Strahil, yes I can’t find anything recently either. You dug way further
than me, I found some regressions on the kernel but I don’t know if they're
related or not:
https://patchwork.kernel.org/patch/5526561/
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1045027
Regarding the OS, nothi