On Sat, Mar 4, 2017 at 8:42 PM, Bill James wrote:
> I have a hardware node where, for whatever reason, most of the VM statuses
> were "?", even though the VMs were up and running fine.
>
This indicates that the engine failed to monitor the host (thus switching
the VMs it knew were running on it to the 'Unknown' state).
Please provide the ovirt-hosted-engine-setup log.
Best Regards
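When the engine shows '?' for VMs, a useful first check is whether libvirt on the host itself still sees them running, e.g. with read-only virsh. A minimal sketch; the domain names and the captured output below are made-up illustrations, not from the reporter's host:

```shell
# Sketch: on the affected host, list the domains libvirt actually knows about.
# The real command (run as root on the host) would be:
#   virsh -r list --all
# Below we parse a captured sample of that output to show what "up and
# running fine" looks like even while the engine reports '?'.
sample_output=$(cat <<'EOF'
 Id    Name                           State
----------------------------------------------------
 1     testvm01                       running
 2     testvm02                       running
EOF
)
# Count domains libvirt reports as running.
running=$(printf '%s\n' "$sample_output" | grep -c ' running$')
echo "running domains: $running"
```

If virsh shows the VMs running while the engine shows '?', the problem is on the monitoring path (engine to VDSM), not with the VMs themselves.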
On Sat, Mar 4, 2017 at 4:24 PM, Manuel Luis Aznar <
manuel.luis.az...@gmail.com> wrote:
> Hello there again,
>
> The error on the first email was while using the repo ovirt-release41.rpm (
> http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm)
On Sun, Mar 5, 2017 at 1:03 AM, Marcin Kruk wrote:
> Could somebody give me a link to docs, or explain the behavior of LV volumes
> in a RHV 4 cluster: one moment they are active, another they are inactive?
> And how do I correlate the LV volume names with the disks of virtual
> machines, or other disks?
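On block (FC/iSCSI) storage domains, VDSM creates one LV per volume, named after the volume UUID, and tags it with the image UUID (`IU_<image-uuid>`) plus parent and metadata tags; LVs are normally activated only while in use (VM running, or a storage operation in progress), which explains the active/inactive flapping. A minimal sketch of correlating LV names with disk images via the tags; the UUIDs in the sample output are made up:

```shell
# Sketch: map LV names (volume UUIDs) to disk image UUIDs via VDSM's LV tags.
# On a real host you would run (as root), with the storage domain's VG UUID:
#   lvs --noheadings -o lv_name,lv_tags <storage-domain-VG>
# Here we parse a captured sample of that output instead.
sample=$(cat <<'EOF'
  7f3a2b10-1111-2222-3333-444455556666 IU_aaaa1111-bbbb-cccc-dddd-eeee22223333,MD_7,PU_00000000-0000-0000-0000-000000000000
  8c4d3e21-1111-2222-3333-444455556666 IU_ffff4444-aaaa-bbbb-cccc-dddd55556666,MD_9,PU_7f3a2b10-1111-2222-3333-444455556666
EOF
)
# For each LV, print "<volume-uuid> -> image <image-uuid>" by stripping the
# IU_ prefix from the matching tag.
mapped=$(printf '%s\n' "$sample" | while read -r lv tags; do
  image=$(printf '%s\n' "$tags" | tr ',' '\n' | sed -n 's/^IU_//p')
  echo "$lv -> image $image"
done)
echo "$mapped"
```

The image UUID from the `IU_` tag is what the engine shows as the disk's ID in the Disks tab, so this gives the LV-to-VM-disk correlation the question asks about.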
Users mailing list
Hello there again,
The error on the first email was using the repo ovirt-release41.rpm (
http://resources.ovirt.org/pub/yum-repo/ovirt-release41.rpm); since I was
getting the same error again and again, I am currently trying with
ovirt-release41-snapshot.rpm (
http://resources.ovirt.org/pub/yum-re
Hello,
we're installing a small oVirt cluster at work for hosting some test
environments. We're interested in running hypervisors inside VMs
(i.e. oVirt itself or a small OpenStack deployment).
Is there any documentation showing how to enable nested virtualization in ovirt?
I've seen that while
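In general, nested virtualization needs two things: the nested parameter enabled on the host's KVM module (`kvm_intel` on Intel, `kvm_amd` on AMD), and the vmx/svm CPU flag exposed to the guest, for which oVirt ships the `vdsm-hook-nestedvt` package. A minimal sketch of the host-side config; the real target path is `/etc/modprobe.d/`, but the sketch writes to a temp dir so it runs without root:

```shell
# Sketch: enable nested KVM on an Intel host (assumption: kvm_intel;
# substitute kvm_amd on AMD hardware).
# The real target file would be /etc/modprobe.d/kvm-nested.conf;
# we use a temp dir here so the sketch is runnable without root.
conf_dir=$(mktemp -d)
cat > "$conf_dir/kvm-nested.conf" <<'EOF'
options kvm_intel nested=1
EOF
# After a module reload (or reboot), verify on the host with:
#   cat /sys/module/kvm_intel/parameters/nested   # -> 1 or Y
grep 'nested=1' "$conf_dir/kvm-nested.conf" && echo "config written"
# Guest-side, the CPU flag is exposed by installing the VDSM hook on each
# host and restarting vdsmd:
#   yum install vdsm-hook-nestedvt
```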
Doh!
I assumed vdsm-client was part of vdsm-cli; installing vdsm-client from
the 4.1.0 release repo worked just fine.
regards,
Brett
On 4 March 2017 at 11:54, Sandro Bonazzola wrote:
>
>
> On 04 Mar 2017 at 12:21, "Maton, Brett" wrote:
>
> I'm not sure why it's asking for vdsm-client >= 4.18.6
I'm not sure why it's asking for vdsm-client >= 4.18.6
I've got the following vdsm rpms installed:
vdsm-4.19.4-1.el7.centos.x86_64
vdsm-api-4.19.4-1.el7.centos.noarch
vdsm-cli-4.19.4-1.el7.centos.noarch
vdsm-hook-vmfex-dev-4.19.4-1.el7.centos.noarch
vdsm-jsonrpc-4.19.4-1.el7.centos.noarch
vdsm-py
Hi guys,
I've started an oVirt 4.1 HE deployment on a Broadwell-based server.
Then I added a second, older, Nehalem-based host to the HE setup. I
downgraded the cluster CPU type to Nehalem to accommodate host2, and it
finally reached score 3400. However, when I try to migrate the HE VM it
fails with th
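One common cause here: live migration requires the destination host to support every CPU feature the VM was started with, and lowering the cluster CPU type only affects VMs (re)started afterwards, so a VM started with a Broadwell model typically needs a shutdown/restart before it can migrate to a Nehalem host. A sketch of comparing flag sets between the hosts; the flag lists below are illustrative, not full Broadwell/Nehalem sets (real input would come from `grep -m1 '^flags' /proc/cpuinfo` on each host):

```shell
# Sketch: find CPU flags present on the source host but missing on the
# destination. The two flag lists are made-up illustrations.
broadwell="fpu vmx sse4_2 aes avx avx2 bmi2"
nehalem="fpu vmx sse4_2"
missing=""
for f in $broadwell; do
  case " $nehalem " in
    *" $f "*) ;;                      # flag also present on destination
    *) missing="$missing $f" ;;       # flag the destination lacks
  esac
done
echo "flags missing on destination:$missing"
```

Any flag in that "missing" set which the running VM's CPU model uses will make the migration fail until the VM is restarted with the lower cluster CPU type.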