Hi,
I can use the noVNC console to connect to the engine VM.
But when I use the noVNC console to connect to another VM in another
datacenter, it fails.
The ovirt-websocket-proxy log is:
Jan 19 15:43:32 ooeng.tltd.com ovirt-websocket-proxy.py[1312]:
192.168.10.104 - - [19/Jan/2021 15:43:32] connecting to:
Ceph support is available via Managed Block Storage (tech preview), but it
cannot be used instead of Gluster for hyperconverged setups.
Moreover, a pure Managed Block Storage setup is not possible at all:
there has to be at least one regular storage domain in a
datacenter.
On Mon, Jan 18, 2021 at 10:29 PM penguin pages wrote:
>
> I was avoiding reloading the OS. This to me was like "reboot as a fix"...
> to wipe environment out and restart, vs repair where I learn how to debug.
There is no simple and reliable way to undo 'hosted-engine --deploy'.
It's not design
On Mon, Jan 18, 2021 at 20:04 Strahil Nikolov <
hunter86...@yahoo.com> wrote:
> Most probably it will be easier if you stick with full-blown distro.
>
> @Sandro Bonazzola can help with CEPH status.
>
Letting the storage team have a voice here :-)
+Tal Nisan, +Eyal Shenitzky, +
Faster than fuse-rbd, but not faster than qemu.
The main issues are the kernel pagecache and client upgrades: for example, on a
cluster with 700 OSDs and 1000 clients we need to update the client version for
new features. With the current oVirt implementation we need to update the kernel
and then reboot the host. With librbd we just need to update the package and
Hi,
It's a hardware SAN, a Dell Compellent. The corruption seems to have appeared
12 days ago. Before I shut the system down, it was running like a charm.
--
Lionel Caignec
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Hm... this sounds bad. If it had been deleted by oVirt, it would have asked you
whether to remove the disk or not, and would have wiped the VM configuration.
Most probably you have data corruption there. Are you using TrueNAS?
Best Regards,
Strahil Nikolov
On Tuesday, 19 January 2021, 00:06:15 GMT+
Hi,
I have a big problem. I just shut down (powered off completely) a guest to do a
cold restart, and at startup the guest says: "Cannot access backing file
'/rhev/data-center/mnt/blockSD/69348aea-7f55-41be-ae4e-febd86c33855/images/8224b2b0-39ba-44ef-ae41-18fe726f26ca/ca141675-c6f5-4b03-98b0-03122
I was avoiding reloading the OS. This to me was like "reboot as a fix"... to
wipe environment out and restart, vs repair where I learn how to debug.
But after weeks... I am running out of time.
Most probably the dwh timestamp is far in the future.
The following is not the correct procedure, but it works:
ssh root@engine
su - postgres
source /opt/rh/rh-postgresql10/enable
psql engine
engine=# select * from dwh_history_timekeeping ;
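The condition being checked can be sketched as a comparison of two epoch-second timestamps. This is only an illustration of the "far in the future" logic; the function name and its interface are my own invention, and the real decision is made by reading the query output above.

```shell
# Hypothetical helper: report whether the DWH last-sync timestamp is
# ahead of the current clock (both given as epoch seconds), which is
# the "far in the future" condition described above.
is_future() {
  last_sync="$1"
  now="$2"
  if [ "$last_sync" -gt "$now" ]; then
    echo "dwh timestamp is in the future"
  else
    echo "dwh timestamp ok"
  fi
}

# Example: a 2026 timestamp checked against a January 2021 clock.
is_future 1767225600 1611057600   # → dwh timestamp is in the future
```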
Best Regards,
Strahil Nikolov
On Monday, 18 January 202
I think it's complaining about the firewall. Try to restore with firewalld
running.
Best Regards,
Strahil Nikolov
On Monday, 18 January 2021, 17:52:04 GMT+2, penguin pages
wrote:
Following this document to redeploy the engine...
https://access.redhat.com/documentation/en-us/re
Most probably it will be easier if you stick with full-blown distro.
@Sandro Bonazzola can help with CEPH status.
Best Regards,
Strahil Nikolov
On Monday, 18 January 2021, 11:44:32 GMT+2, Shantur Rathore
wrote:
Thanks Strahil for your reply.
Sorry just to confirm,
1. Are
Are you sure that oVirt doesn't still use it (storage domains)?
Best Regards,
Strahil Nikolov
On Monday, 18 January 2021, 09:11:18 GMT+2, Christian Reiss
wrote:
Update:
I found out that 4a62cdb4-b314-4c7f-804e-8e7275518a7f is an iscsi target
outside of gluster. It is a te
Complete logs can be found here:
vdsm.log - https://paste.c-net.org/DownloadPressure
supervdsm.log - https://paste.c-net.org/LaterScandals
Hello,
I had a problem with the engine server: the clock changed to 2026 and now I
don't have any reports on the dashboard.
The version is 4.2.3.8-1.el7
Any idea?
Thanks
--
Jose Ferradeira
http://www.logicworks.pt
In case it's useful, here is the mount occurring.
[root@brick setup_debugging_logs]# while :; do mount | grep stumpy ; sleep 1;
done
stumpy:/tanker/ovirt/host_storage on
/rhev/data-center/mnt/stumpy:_ta
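A bounded variant of the loop above can avoid polling forever. This is my own sketch, not oVirt tooling: the function name, the attempt limit, and the "mounted"/"timeout" strings are all assumptions.

```shell
# Sketch: poll for a mount entry like the loop above, but give up
# after a fixed number of attempts instead of looping forever.
poll_mount() {
  target="$1"
  tries="${2:-30}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if mount | grep -q -- "$target"; then
      echo "mounted"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timeout"
  return 1
}

poll_mount / 1   # "/" is always present in mount output
```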
After looking into the logs... I think the issue is about the storage where it
should deploy. The wizard did not seem to focus on that... I a$$umed it was
aware of the volume from the previously detected deployment... but...
2021-01-18 10:34:07,917-0500 INFO otopi.ovirt_hosted_engine_setup.ansible_utils
ansible
Thanks for pointing that out to me, Konstantin.
I understand that it would use a kernel client instead of the userland rbd
library. Isn't that better, as I have seen kernel clients 20x faster than
userland? I am probably missing something important here; would you mind
detailing that?
Regards,
Shantur
On
Hi Didi,
I did log clean up and am re-running ovirt-hosted-engine-cleanup &&
ovirt-hosted-engine-setup to get you cleaner log files.
searching for host_storage in vdsm.log...
**snip**
2021-01-18 08:43:18,842-0700 INFO (jsonrpc/3) [api.host] FINISH getStats
return={'status': {'code': 0, 'messag
Following this document to redeploy the engine...
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.1/html/self-hosted_engine_guide/cleaning_up_a_failed_self-hosted_engine_deployment
### From the host which had the engine listed in its inventory ###
[root@medusa ~]# /usr/sbin/ovirt-host
Beware of Ceph with oVirt Managed Block Storage: the current integration is
only possible with the kernel client, not with qemu-rbd.
k
Sent from my iPhone
> On 18 Jan 2021, at 13:00, Shantur Rathore wrote:
>
>
> Thanks Strahil for your reply.
>
> Sorry just to confirm,
>
> 1. Are you saying Ceph on o
It's fine now. I just shut down and started the engine using:
hosted-engine --vm-shutdown
hosted-engine --vm-start
Thank you very much.
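The two commands above can be wrapped so the sequence is easy to dry-run. The wrapper and its dry-run convention are my own sketch; only the hosted-engine flags come from the message above.

```shell
# Sketch: the shutdown/start sequence above as a function. Pass "echo"
# as the first argument to dry-run (print the commands instead of
# executing them); pass nothing on a real hosted-engine host.
restart_engine_vm() {
  run="$1"
  $run hosted-engine --vm-shutdown
  $run hosted-engine --vm-start
}

restart_engine_vm echo
```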
On Mon, 18 Jan 2021, 21:19 Yedidyah Bar David, wrote:
> After you removed the hosts, and before you added them again: Did you
> reinstall the OS? If not, there might
After you removed the hosts, and before you added them again: Did you
reinstall the OS? If not, there might be some mess left behind.
I suggest trying again: remove a host, reinstall the OS on it, then
add it back, and make sure you choose "Deploy". Then give it some time
to update (should not be mo
Dear oVirt team,
On Thursday, 12 December 2019 16:53:41 CET Pavel Nakonechnyi wrote:
>
> > > however, I was not able to find any clues where in particular it is
> > > implemented...
> > >
> > > Once this is understood, it will be possible to consider altering the
> > > corresponding code to incl
When attaching a disk, it is not possible to set the disk order, nor to modify
the order later.
Example:
A new VM is provisioned with 5 disks: Disk0 is the OS, and further disks are
attached in order up to Disk4.
Removing Disk3 and later re-attaching it does not guarantee that it will come
back as Disk3.
Thanks Strahil for your reply.
Sorry just to confirm,
1. Are you saying Ceph on oVirt Node NG isn't possible?
2. Would you know which devs would be best to ask about the recent Ceph
changes?
Thanks,
Shantur
On Sun, Jan 17, 2021 at 4:46 PM Strahil Nikolov via Users
wrote:
> At 15:51 + on 17
Update & Fix:
There were remnant entries of filters in both
/etc/lvm/lvm.conf and
/etc/multipath.conf
that worked well with the Gluster LVM but went crazy with iSCSI LVM mounts.
Fixed the entries, rebooted the server. Now it works and it is back up.
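A quick way to spot such leftover filter lines is to grep for them. This helper and the inline sample are my own sketch; on a real host you would point the grep at /etc/lvm/lvm.conf itself.

```shell
# Sketch: print any filter/global_filter lines from lvm.conf-style
# text, so remnant entries like the ones described above stand out.
show_filters() {
  printf '%s\n' "$1" | grep -E '^[[:space:]]*(global_)?filter'
}

# Inline sample standing in for /etc/lvm/lvm.conf:
sample='devices {
    filter = ["a|/dev/sdb|", "r|.*|"]
}'
show_filters "$sample"
```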
Cheerio!
-Chris.
On 18/01/2021 08:10,
But on every KVM host, the fence_xvm command succeeds.
[root@ohost1 ~]# fence_xvm -o list
1.ovs1 7fd9b01e-236c-4d08-9c07-ad0b710139e2 off
1.ovs2 6b0e73b1-649a-470d-9c1b-ca6919e0514d off
2.host1 476e5157-1211-4701-
Hi,
I solved this issue.
dnf config-manager --set-disabled spp
For whatever reason, the HP hardware drivers repo conflicted with the oVirt
Node install.
> On 15 Jan 2021, at 17:30, Andrei Verovski wrote:
>
> Hi,
>
> After installing fresh CentOS stream (on HP ProLiant Node) with recommended
> partit