recently upgraded to 4.2 and had some problems with the engine VM running; got
that cleared up, and now my only remaining issue is that
ovirt-ha-broker and ovirt-ha-agent are continually crashing on all three of
my hosts. Everything is up and working fine otherwise, all VMs running and
hosted
Explored logs on both hosts.
broker.log shows no errors.
agent.log is not looking good:
on host1 (which is running the hosted engine):
MainThread::ERROR::2018-01-12
21:51:03,883::agent::205::ovirt_hosted_engine_ha.agent.agent.Agent::(_run_agent)
Traceback (most recent call last):
File
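(For anyone hitting the same thing: the two services and their recent logs can be
checked on each host with something like the following; the time window is just an
example:
  systemctl status ovirt-ha-agent ovirt-ha-broker
  journalctl -u ovirt-ha-agent -u ovirt-ha-broker --since "1 hour ago")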
Hi,
the VM is up according to the status (at least for a while). You
should be able to use the console and diagnose anything that happened
inside (like the need for fsck and such) now.
Check the presence of those links again now; the metadata file content
is not important, but the file has to exist.
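(Roughly along these lines, with the UUIDs taken from your own setup -- the paths
below are only placeholders:
  ls -l /var/run/vdsm/storage/<sd_uuid>/<image_uuid>/
  ls -l /rhev/data-center/mnt/<storage_server>/<sd_uuid>/images/<image_uuid>/
The entry under /var/run/vdsm/storage should be a link to the image directory on
the storage domain, and the metadata volume just has to be present there.)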
Recent versions of livemedia-creator use qemu directly instead of libvirt,
and I think I saw a problem there also, but didn't get to fix it just yet.
You can use virt-builder to install a CentOS VM, or use an el7-based mock
environment. You can follow the Jenkins job here [1]; basically you need
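(A minimal virt-builder invocation would look something like this; the template
name, size and password are only examples:
  virt-builder centos-7.4 -o centos7.qcow2 --format qcow2 --size 20G \
    --root-password password:changeme)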
> On 12 Jan 2018, at 12:27, Giorgio Biacchi wrote:
>
> It's the same for me also.
>
> Ovirt is connected to FreeIPA this time, so it seems not to be bound to a
> specific AAA engine extension.
>
> Do we have to submit a new bug on bugzilla??
Looks like
Trying to fix one thing, I broke another :(
I fixed mnt_options for the hosted engine storage domain and installed the latest
security patches on my hosts and hosted engine. All VMs are up and running,
but hosted-engine --vm-status reports issues:
[root@ovirt1 ~]# hosted-engine --vm-status
--==
Thanks a lot, Simone!
hosted-engine --set-shared-config mnt_options
backup-volfile-servers=host1.domain.com:host2.domain.com --type=he_conf
solved my issue!
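(For the archive: the resulting value can be double-checked with something like
hosted-engine --get-shared-config mnt_options --type=he_conf
if that get variant is available in this version.)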
Regards,
Artem
On Fri, Jan 12, 2018 at 3:39 PM, Simone Tiraboschi
wrote:
>
>
> On Fri, Jan 12, 2018 at 1:22 PM,
> Can you please stop all hosted engine tooling (
On all hosts I should have added.
Martin
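(Stopping the hosted engine tooling typically means something along the lines of
  systemctl stop ovirt-ha-agent ovirt-ha-broker
on every host, and systemctl start ovirt-ha-broker ovirt-ha-agent to bring them
back afterwards -- treat the exact ordering as a best guess.)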
On Fri, Jan 12, 2018 at 3:22 PM, Martin Sivak wrote:
>> RequestError: failed to read metadata: [Errno 2] No such file or directory:
>>
> RequestError: failed to read metadata: [Errno 2] No such file or directory:
> '/var/run/vdsm/storage/248f46f0-d793-4581-9810-c9d965e2f286/14a20941-1b84-4b82-be8f-ace38d7c037a/8582bdfc-ef54-47af-9f1e-f5b7ec1f1cf8'
>
> ls -al
>
The blockIoTune error should be harmless. It is just the result of a
data check by another component (mom) that encountered a VM that no
longer exists.
I thought we had squashed all the logs like that, though...
Martin
On Fri, Jan 12, 2018 at 3:12 PM, Jayme wrote:
> One more thing to
One more thing to add: I've also been seeing a lot of this in the syslog:
Jan 12 10:10:49 cultivar2 journal: vdsm jsonrpc.JsonRpcServer ERROR
Internal server error#012Traceback (most recent call last):#012 File
"/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 606, in
Thanks for the help thus far. Storage could be related, but all other VMs
on the same storage are running OK. The storage is mounted via NFS from within
one of the three hosts; I realize this is not ideal. This was set up by a
previous admin, more as a proof of concept, and VMs were put on there that
On Thu, Jan 11, 2018 at 6:15 AM, Jayme wrote:
> I performed an oVirt 4.2 upgrade on a 3-host cluster with NFS shared
> storage. The shared storage is mounted from one of the hosts.
>
> I upgraded the hosted engine first, downloading the 4.2 rpm, doing a yum
> update then engine
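(For anyone following the same path, the usual 4.2 engine upgrade sequence is
roughly the following; the release rpm URL is from memory, so double-check it
against the release notes:
  yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release42.rpm
  yum update "ovirt-*-setup*"
  engine-setup
  yum update)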
On Fri, Jan 12, 2018 at 11:11 AM, Martin Sivak wrote:
> Hi,
>
> the hosted engine agent issue might be fixed by restarting
> ovirt-ha-broker or updating to newest ovirt-hosted-engine-ha and
> -setup. We improved handling of the missing symlink.
>
Available just in oVirt 4.2.1
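(In practice that would be something like the following on each host, assuming the
standard package names from the 4.2 repos:
  systemctl restart ovirt-ha-broker ovirt-ha-agent
  yum update ovirt-hosted-engine-ha ovirt-hosted-engine-setup)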
On Fri, Jan 12, 2018 at 1:22 PM, Artem Tambovskiy <
artem.tambovs...@gmail.com> wrote:
> Hi,
>
> I have deployed a small cluster with 2 ovirt hosts and GlusterFS cluster
> some time ago. And recently during software upgrade I noticed that I made
> some mistakes during the installation:
>
> if the
Hi,
I deployed a small cluster with 2 oVirt hosts and a GlusterFS cluster
some time ago, and recently during a software upgrade I noticed that I made
some mistakes during the installation:
if the host which was deployed first is taken down for an upgrade
(powered off or rebooted), the engine
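(For reference, the hosted engine storage settings live in
/etc/ovirt-hosted-engine/hosted-engine.conf on each host; with GlusterFS it
typically contains something like the following -- hostnames here are placeholders:
  storage=host1.domain.com:/engine
  mnt_options=backup-volfile-servers=host2.domain.com)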
It's the same for me also.
oVirt is connected to FreeIPA this time, so it seems not to be bound to a
specific AAA engine extension.
Do we have to submit a new bug on Bugzilla?
Regards
On 01/11/2018 02:04 PM, Latchezar Filtchev wrote:
Hi Guys,
The same here. Upgrade 4.1.8 to 4.2. oVirt
Please help, I'm really not sure what else to try at this point. Thank you
for reading!
I'm still working on trying to get my hosted engine running after a botched
upgrade to 4.2. Storage is NFS mounted from within one of the hosts. Right
now I have 3 CentOS 7 hosts that are fully updated with
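(The state on each host can be compared with hosted-engine --vm-status, and once
the engine VM is up its console should be reachable with hosted-engine --console.)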
On Fri, Jan 12, 2018 at 10:12 AM, Nathanaël Blanchet wrote:
> You're right, if I manually stop and restart the vm through the ui, the
> wheel disappears. But, restarting the vm with vagrant, the wheel comes back,
> so I guess it comes from sdk4 action (ansible/vagrant), while I
You're right: if I manually stop and restart the VM through the UI, the
wheel disappears. But when restarting the VM with vagrant, the wheel comes
back, so I guess it comes from an sdk4 action (ansible/vagrant), while I
first guessed it came from cloud-init.
On 11/01/2018 at 19:31, Luca 'remix_tj'
The oVirt Project is pleased to announce the availability of the oVirt
4.2.1 First Release Candidate, as of January 12th, 2018.
This update is the first in a series of stabilization updates to the 4.2
series.
This release is available now for:
* Red Hat Enterprise Linux 7.4 or later
* CentOS
Hi,
I have a VM on oVirt 4.2 which is locked after a snapshot revert.
I unlocked the disk/snapshot and VM with unlock_entity.sh;
it now shows no locks:
/usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t all -q
Locked VMs:
Locked templates:
Locked disks:
Locked snapshots:
But the VM
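(If it helps, the same query can be run per entity type as well, e.g.:
  /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t vm -q
  /usr/share/ovirt-engine/setup/dbutils/unlock_entity.sh -t snapshot -q
assuming the per-type flags behave like -t all above.)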