Dear all,
does anyone have an idea how to address this?
Thank you and all the best,
Simone
-----Original Message-----
From: Bruckner, Simone
Sent: Wednesday, 27 March 2019 13:25
To: users@ovirt.org
Subject: [ovirt-users] Multiple "Active VM before the preview" snapshots
Hi,
we see some VMs that show an inconsistent view of snapshots. Checking the
database for one example VM shows the following result:
engine=# select snapshot_id, status, description from snapshots where vm_id =
'40c0f334-dac5-42ad-8040-e2d2193c73c0';
snapshot_id |
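As a starting point, a query along the following lines could list every VM that carries more than one "Active VM before the preview" snapshot. This is only a sketch against the engine database; the exact description string and the grouping are assumptions derived from the symptom described above:
engine=# select vm_id, count(*)
           from snapshots
           where description = 'Active VM before the preview'
           group by vm_id
           having count(*) > 1;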
Hi,
worked!
Thank you very much and all the best,
Simone
-----Original Message-----
From: Shmuel Melamud
Sent: Sunday, 15 July 2018 13:31
To: Bruckner, Simone
Cc: users@ovirt.org
Subject: Re: [ovirt-users] VM stuck in "Migrating to"
Hi!
As I understand, the VM is do
Hi,
running engine-setup did not resolve the issue.
All the best,
Simone
From: Maton, Brett
Sent: Sunday, 15 July 2018 13:59
To: Bruckner, Simone
Cc: users@ovirt.org
Subject: Re: [ovirt-users] VM stuck in "Migrating to"
You could also run engine-setup on hosted-en
Hi all,
I have a VM stuck in the state "Migrating to". I restarted ovirt-engine and
rebooted all hosts, without success. I am running oVirt 4.2.4.5-1.el7 on CentOS 7.5
hosts with vdsm-4.20.32-1.el7.x86_64. How can I clean this up?
Thank you and all the best,
Simone
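As a first check, a query of this shape could show what runtime state the engine still records for the VM. It is only a sketch: the vm_dynamic/vm_static join and the run_on_vds and migrating_to_vds column names are assumptions about the engine schema, and 'VMNAME' is a placeholder:
engine=# select s.vm_name, d.status, d.run_on_vds, d.migrating_to_vds
           from vm_dynamic d
           join vm_static s on s.vm_guid = d.vm_guid
           where s.vm_name = 'VMNAME';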
Hi Nir,
I identified the reason for the failing OVF updates on the initial VG –
metadata was affected by blkdiscard tests in scope of
https://bugzilla.redhat.com/show_bug.cgi?id=1562369
However, the OVF updates are failing on other installations as well (on 2 out
of 40 storage domains). Here
Hi,
I have defined thin LUNs on the array and presented them to the oVirt hosts.
I will change the LUN from thin to preallocated on the array (which is
transparent to the oVirt host).
Besides removing “discard after delete” from the storage domain flags, is there
anything else I need to take
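To verify the current flag before making the change, something like the following might work. It is a sketch only; the storage_domain_static table and the discard_after_delete column are assumptions about recent engine schemas:
engine=# select storage_name, discard_after_delete
           from storage_domain_static;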
Hi all,
we had an unexpected shutdown of one of our hypervisor nodes caused by a
hardware problem. We ran "Confirm that the host has been rebooted" and, as long
as the host is in maintenance mode, we see 0 VMs running. But when we activate
the host, it shows 14 VMs running. How can we get thi
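A query along these lines could show which VMs the engine still believes are running on that host. It is a sketch only; the vds_static and vm_dynamic table and column names are assumptions, and 'HOSTNAME' is a placeholder:
engine=# select s.vm_name, d.status
           from vm_dynamic d
           join vm_static s on s.vm_guid = d.vm_guid
           where d.run_on_vds = (select vds_id from vds_static
                                 where vds_name = 'HOSTNAME');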
d-a9b4-d4cc65c48429 | d59a9f9d-f0dc-48ec-97e8-9e7a8b81d76d |
4659b5e0-93c1-478d-97d0-ec1cf4052028 | 946ee7b7-0770-49c9-ac76-0ce95a433d0f | t
Is there a way to recover that disk?
All the best,
Simone
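To see whether the disk's volumes are marked illegal, a query of this shape might help. It is a sketch: the images table columns and the status codes (1 = OK, 2 = LOCKED, 4 = ILLEGAL) are assumptions, and '<disk_id>' is a placeholder for the affected disk's ID:
engine=# select image_guid, imagestatus, active
           from images
           where image_group_id = '<disk_id>';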
From: users-boun...@ovirt.org On Behalf Of Bruckner,
Simone
Sent: Sunday, 18 March
Hi all,
we did a live storage migration of one of three disks of a VM; it failed
because the VM became unresponsive while the auto-generated snapshot was being deleted:
2018-03-16 15:07:32.084+01 |0 | Snapshot 'VMNAME_Disk2 Auto-generated
for Live Storage Migration' creation for VM 'VMNAME' was initia
?
All the best,
Simone
From: Shani Leviim [mailto:slev...@redhat.com]
Sent: Sunday, 11 March 2018 14:09
To: Bruckner, Simone
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain
Hi Simone,
Sorry for the delay in replying to you.
Does the second storage domain you
To: Bruckner, Simone
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Faulty multipath only cleared with VDSM restart
Hi Simone,
The multipath health report is built from the current multipath state when VDSM
starts, and after that it is maintained based on events sent by udev.
You can read about the implementation
Hi,
we see some VMs that show an inconsistent view of snapshots. Checking the
database for one example VM shows the following result:
select snapshot_id, status, description from snapshots where vm_id =
'420a6445-df02-da6a-e4e3-ddc451b2914d';
snapshot_id | status
Hi,
after rebooting the SAN switches, we see faulty multipath entries in VDSM.
Running vdsm-client Host getStats shows multipathHealth entries:
"multipathHealth": {
"3600601603cc04500a2f9cd597080db0e": {
"valid_paths": 2,
"failed_paths": [
"sdcl",
"sdde"
]
},
...
Runni
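For a quick look at just the multipath part of the stats, piping the output through jq might work, assuming jq is installed on the host:
vdsm-client Host getStats | jq '.multipathHealth'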
On Behalf Of
Bruckner, Simone [simone.bruck...@fabasoft.com]
Sent: Tuesday, 06 March 2018 10:19
To: Shani Leviim
Cc: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain
Hi Shani,
please find the logs attached.
Thank you,
Simone
From: Shani Leviim [mailto:sl
Hello, I apologize for bringing this one up again, but does anybody know if
there is a chance to recover a storage domain that cannot be activated?
Thank you,
Simone
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Bruckner, Simone
Sent: Friday, 2 March 2018
Simone
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Bruckner, Simone
Sent: Thursday, 1 March 2018 17:57
To: users@ovirt.org
Subject: Re: [ovirt-users] Cannot activate storage domain
Hi,
we are still struggling to get a storage domain online again. We tried
exist:
(u'b83c159c-4ad6-4613-ba16-bab95ccd10c0',))
Any ideas?
Thank you and all the best,
Simone
From: users-boun...@ovirt.org [mailto:users-boun...@ovirt.org] On Behalf Of
Bruckner, Simone
Sent: Wednesday, 28 February 2018 15:52
To: users@ovirt.org
Subject: [ovirt-users] Cannot a
Bruckner, Simone would like to recall the message "[ovirt-users] Cannot activate host from
maintenance mode".
Hi all,
we run a small oVirt installation that we also use for automated testing
(automatically creating and dropping VMs).
We have an inactive FC storage domain that we cannot activate any more. We see
several events from that time, starting with:
VM perftest-c17 is down with error. Exit mes
Hi all,
I'm trying to update my oVirt installation but cbs.centos.org (referenced by
ovirt-4.0-dependencies.repo) seems to be down. Any ideas when it will be up
again?
All the best,
Simone Bruckner
[mailto:users-boun...@ovirt.org] On Behalf Of Michal Skrivanek
Sent: Friday, 23 September 2016 16:57
To: Bruckner, Simone <mailto:simone.bruck...@fabasoft.com>
Cc: users@ovirt.org <mailto:users@ovirt.org>
Subject: Re: [ovirt-users] Cannot change Cluster Compatibility Version when a
VM
Hi all,
I am trying to upgrade an oVirt installation (3.6.7.5-1.el6) to 4.0. My
datacenters and clusters have 3.5 compatibility settings.
I followed the instructions from
http://www.ovirt.org/documentation/migration-engine-3.6-to-4.0/ but cannot
proceed in engine-setup as 3.5 compatibility i