for the VMs.
If the snapshot operation buttons appear but are grayed out, this implies that the disk images or snapshots are locked due to a previous VM operation.
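A minimal sketch of how the lock state can be inspected and released with the engine's unlock_entity.sh helper (run on the engine machine; the -t/-q flags are from memory, so verify with --help first):

# list entities the engine still considers locked
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q
# unlock a specific disk by its ID (use with care, engine should be idle)
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t disk <disk-id>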
Thanks,
Sharon
On Mon, May 8, 2023 at 3:24 PM Christoph Köhler <koeh...@luis.uni-hannover.de> wrote:
Hi!
On a fresh version 4.5.4-1.el8: in the VM portal no snapshot operations are possible, neither for a user_vm_manager nor for a super_user. We have imported VMs from 4.3 with existing snapshots. These are listed for the users in the snapshot box, but no operation is possible there either.
Hello,
on CentOS Stream the deployment of HE fails with the following message:
[ INFO ] TASK [ovirt.ovirt.engine_setup : Gather facts on installed packages]
[ INFO ] ok: [localhost -> 192.168.1.239]
[ INFO ] TASK [ovirt.ovirt.engine_setup : Fail when firewall manager is not installed]
[
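The failing task name suggests the role is checking for firewalld; a minimal sketch of the likely fix on the host before re-running the deployment (assuming firewalld really is what the check expects):

dnf install -y firewalld
systemctl enable --now firewalld
hosted-engine --deploy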
I use oVirt 4.3.10 and need access to a glance repo for VM images/templates. The built-in external provider 'Public Glance repository for oVirt' (glance.ovirt.org) does not work; it seems no longer to be maintained.
Does someone know what to do?
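A quick way to check whether the endpoint answers at all (9292 is the usual Glance API port; adjust if the provider is configured differently):

curl -sS http://glance.ovirt.org:9292/ | head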
Greetings from
Christoph
-c "\x on" -c "SELECT * FROM images where
> image_group_id = '02240cf3-65b6-487c-b5af-c266a1dd18f8';"
>
> On Mon, Nov 9, 2020 at 5:18 PM Christoph Köhler <koeh...@luis.uni-hannover.de> wrote:
>
> yes sure, here they are:
>
> On 09.1
Hello experts,
perhaps someone has an idea about this error. It appears when I try to migrate a disk to another storage domain while the VM is running (live). Generally it works well; this is the log snippet:
HSMGetAllTasksStatusesVDS failed: Error during destination image manipulation:
Hey,
it happens occasionally in our oVirt 4.3.9 that after a reboot of one of the dedicated gluster nodes (replica 3, arbiter 1) some client connections are missing on the server side:
# gluster volume status gluvol3 clients
Client connections for volume gluvol3
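What has helped in similar situations (a sketch, not a guaranteed fix): restart glusterd on the rebooted node, then confirm the clients reconnect and self-heal catches up:

systemctl restart glusterd
gluster volume status gluvol3 clients
gluster volume heal gluvol3 info summary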
Hello!
I have installed metrics store along this documentation:
https://ovirt.org/documentation/metrics-install-guide/Installing_Metrics_Store.html
Everything went fine without errors, so I now have the login page in my browser. The problem is that the login credentials seem to be not
3, 2020 at 12:05 PM Christoph Köhler <koeh...@luis.uni-hannover.de> wrote:
Hello Jayme,
the gluster-config is this:
gluster volume info gluvol3
Volume Name: gluvol3
Type: Replicate
Volume ID: 8172ebea-c118-424a-a407-50b2fd87e372
Status: Started
Sn
configurations. This is one of the main
reasons why I have been unable to implement libgfapi.
On Mon, Feb 3, 2020 at 10:53 AM Christoph Köhler <koeh...@luis.uni-hannover.de> wrote:
Hi,
since we updated to 4.3.7, and another cluster to 4.3.8, snapshots are no longer possible. In previous versions everything went well...
° libGfApi enabled
° gluster 6.7.1 on gluster-server and client
° libvirt-4.5.0-23.el7_7.3
vdsm on a given node says:
jsonrpc/2) [vds] prepared volume
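If libgfapi is the trigger, a possible workaround is to switch back to FUSE access (a sketch; LibgfApiSupported is the engine-config key I believe controls this, so double-check the key name on your version):

engine-config -s LibgfApiSupported=false --cver=4.3
systemctl restart ovirt-engine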
Hey,
is the problem back with qemu/dynamic_ownership?
We have one DC with two clusters: one: vdsm-4.30.38 / libvirt-4.5.0-23,
two: vdsm-4.30.40 / libvirt-4.5.0-23, LibgfApi enabled, Gluster-Server
6.5.1, Gluster-Client 6.7, opVersion 6
It went well for a long time but now, with getting
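A quick check on the hosts to see what libvirt is actually configured with (as far as I remember, vdsm pinned dynamic_ownership to 0 on 4.3-era hosts):

grep -E '^\s*dynamic_ownership' /etc/libvirt/qemu.conf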
Hey,
since we moved the HE to another storage with
hosted-engine --deploy --restore-from-file=...
we have a running engine (4.3.7.2-1.el7) - fine!
But every 6 minutes I get an error from dwh:
2020-01-16
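If the recurring error is the usual stale DWH bookkeeping left behind by a restore, the flag can be inspected in the engine database (a sketch, assuming the default database name; dwh_history_timekeeping is the table I would look at):

sudo -u postgres psql engine -c "SELECT var_name, var_value FROM dwh_history_timekeeping;"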
c38c5f, vmId=3b79d0c0-47e9-47c3-8511-980a8cfe147c (api:52)
On 18.07.19 10:42, Benny Zlotnik wrote:
It should work. What are the engine and vdsm versions?
Can you add vdsm logs as well?
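For reference, the default log locations:

tail -f /var/log/vdsm/vdsm.log           # on each host
tail -f /var/log/ovirt-engine/engine.log # on the engine machine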
On Thu, Jul 18, 2019 at 11:16 AM Christoph Köhler <koeh...@luis.uni-hannover.de> wrote:
'd2964ff9-10f7-4b92-8327-d68f3cfd5b50' of vm
'3b79d0c0-47e9-47c3-8511-980a8cfe147c', attempting to end replication
before deleting the target disk
LiveStorageMigration on gluster - should that work at all? Has someone
tried it?
Greetings!
Christoph Köhler
(of three) is upgraded? And what about the
sequence - first the hypervisors or first gluster nodes?
Is there anyone who has done this?
Greetings!
Christoph Köhler
ovirt_cinderlib@localhost:5432
Best Regards,
Strahil Nikolov
On Monday, July 1, 2019, 9:42:18 AM GMT-4, Christoph Köhler wrote:
Hello,
we tried to migrate the hosted engine to a new storage, but we ran into an error - the same one on every attempt.
What we did is:
° Version 4.3.4.3-1.el7
° in the engine vm: systemctl stop ovirt-engine
° took backup with scope=all
° hosted-engine --set-maintenance --mode=global
°
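For comparison, the documented backup/restore flow looks roughly like this (a sketch, assuming the standard engine-backup tooling; file names are placeholders):

# on the engine VM, after stopping ovirt-engine
engine-backup --mode=backup --scope=all --file=engine.bak --log=backup.log
# then redeploy against the new storage
hosted-engine --deploy --restore-from-file=engine.bak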
Hello,
does someone have experience with cephfs as a VM storage domain? I am considering it but have not found any hints...
Thanks for any pointers...
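For what it's worth, cephfs can in principle be attached as a POSIX-compliant FS storage domain; a sketch of the parameters (monitor addresses and secret file are placeholders):

# Storage -> New Domain -> Storage Type: POSIX compliant FS
# Path:          mon1:6789,mon2:6789:/
# VFS Type:      ceph
# Mount Options: name=admin,secretfile=/etc/ceph/admin.secret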
--
Christoph Köhler
Leibniz Universität IT Services
Schloßwender Straße 5, 30159 Hannover
Tel.: +49 511 762 794721
koeh...@luis.uni-hannover.de
http