Thanks Strahil,
Unfortunately changing the filter is done after the initial install.
We manually partition sda so that sdb isn’t touched during install.
The issues with multipath grabbing sdb are ongoing with a possible manual fix
being tested now.
Testing has paused at the moment as we
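One sketch of keeping multipath off the local disks with a drop-in file, assuming the multipath build reads /etc/multipath/conf.d/ (check the config_dir default on your version); the devnode pattern below is an assumption for sda/sdb, and blacklisting by WWID is more robust since sdX names can change across boots:

```
# /etc/multipath/conf.d/local.conf (sketch; patterns are placeholders)
blacklist {
    devnode "^sd[ab]$"
    # or, more robustly, by WWID (placeholder value):
    # wwid "3600508b1..."
}
```

After adding the file, tell multipathd to re-read its configuration (e.g. `multipathd reconfigure` on recent builds) and verify with `multipath -ll` that sdb is no longer claimed.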
Most probably there is an LVM filter. As stated in /etc/multipath.conf, use a
separate file to blacklist the local disks without modifying /etc/multipath.conf
itself.
Best Regards,
Strahil Nikolov
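A minimal sketch of the kind of LVM filter involved, assuming sda holds the system VG and sdb should stay untouched (the device paths and regexes are assumptions; adjust them to the actual layout, and prefer stable /dev/disk/by-id paths where possible):

```
# /etc/lvm/lvm.conf -- devices { } section (sketch)
# accept sda and its partitions, reject everything else
filter = [ "a|^/dev/sda.*|", "r|.*|" ]
```

After changing the filter, regenerating the initramfs (e.g. `dracut -f` on EL7) is usually needed so the filter also applies during early boot.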
Hi All,
I have a server with a RAID 1 disk for sda and a RAID 5 disk for sdb.
Following default
Same result on 4.3.6-rc1, and the manual update to ovirt-node-ng-nodectl noarch
4.4.0-0.20190820.0.el7 fixes it there.
Didn't try it on the 'production' 4.3.5 variant, but would assume it works
there, too.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Looks like the fix missed the release; it will be included in the next
version. For now, you can either install the latest nightly ISO
or manually install the latest ovirt-node-ng-nodectl rpm on your existing
installation.
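Before reinstalling anything, it can help to compare the installed nodectl build against the fixed one. A minimal sketch, where the `installed` string is a placeholder and the `fixed` build string is the one mentioned in this thread:

```shell
# Sketch: is the installed ovirt-node-ng-nodectl at least the fixed build?
# "installed" is a placeholder here; on a real node use:
#   rpm -q --qf '%{VERSION}-%{RELEASE}\n' ovirt-node-ng-nodectl
installed="4.3.5-0.20190805.0.el7"
fixed="4.4.0-0.20190820.0.el7"
# sort -V orders version strings; if the oldest of the pair is the fixed
# build, then the installed one is new enough
oldest=$(printf '%s\n%s\n' "$installed" "$fixed" | sort -V | head -n1)
if [ "$oldest" = "$fixed" ]; then
    echo "nodectl is new enough"
else
    echo "upgrade: yum update ovirt-node-ng-nodectl"
fi
```

This only compares version strings locally; the actual update still has to come from the oVirt repos or the nightly ISO.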
On Monday, August 19, 2019, wrote:
I concur with Paul, I got the same from fresh installs of the oVirt node image
created August 5th on all instances.
Sorry for the confusion; this is with the latest node, 4.3.5.2-1.el7.
It didn't have this problem in 4.3.4-1.
Regards,
Paul S.
From: Yuval Turgeman
Sent: 19 August 2019 16:00
To: Staniforth, Paul
Cc: users@ovirt.org
Subject: Re: [ovirt-users]
You just hit [1] - can you try this with the latest 4.3.5?
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1728998
On Tue, Aug 13, 2019 at 6:57 PM Staniforth, Paul <
p.stanifo...@leedsbeckett.ac.uk> wrote:
> Hello,
>
> on the latest version of the oVirt-node install running
>
OK, if the icon is there, that is a good thing. There would be no icon if
you didn't select deploy.
It's not terribly obvious when first installing a second host that it needs
the deploy part set.
There's something else causing the engine migration to fail. You can dig
through the logs on the
I guess you can log in to ovirt2 and run
hosted-engine --set-maintenance --mode=local && sleep 30 && hosted-engine
--vm-status
Then reinstall ovirt2 from the web UI and mark the engine for deployment.
Once the reinstall is over - remove maintenance via :
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : tmn1-ovirt1.corp.gseis.ru
Host ID: 1
Engine status : {"health": "good",
and some more logs:
2019-02-26 11:04:28,747+05 INFO
[org.ovirt.engine.core.vdsbroker.monitoring.VmAnalyzer]
(EE-ManagedThreadFactory-engineScheduled-Thread-81) [] VM
'fa3a78de-b329-4a58-8f06-efd6b0e3c719' is migrating to VDS
'45b1d017-16ee-4e89-97f9-c0b002427e5d'(ovirt2) ignoring it in the
# hosted-engine --vm-status
--== Host 1 status ==--
conf_on_shared_storage : True
Status up-to-date : True
Hostname : ovirt1
Host ID: 1
Engine status : {"health": "good", "vm": "up",
What is the output of 'hosted-engine --vm-status' from the host that is hosting
the HostedEngine VM?
Best Regards,
Strahil Nikolov
On Feb 26, 2019 05:50, k...@intercom.pro wrote:
The crown icon on the left of the second host is gray. When I try to migrate
the engine, I get the error:
Migration of VM 'HostedEngine' to host 'ovirt2' failed: VM destroyed during
the startup
EVENT_ID: VM_MIGRATION_NO_VDS_TO_MIGRATE_TO(166), No available host was found
to migrate VM
If you have "installed" or "reinstalled" the second host without
purposely selecting "DEPLOY" under hosted-engine actions,
it will not be able to run the hosted-engine VM.
A quick way to tell if you did is to look at the hosts view and look for
the "crowns" on the left like this attached pic
in logs:
Candidate host 'ovirt2' ('4086-9cce-365172819c60') was filtered out by
'VAR__FILTERTYPE__INTERNAL' filter 'HA' (correlation id: null)
Thanks for the answer.
Now I have two hosts, 4.2.6 and 4.2.8, and the engine is 4.2.6. VMs migrate between
these hosts without problems, but the engine VM refuses to migrate to the 4.2.8
host; it says:
No available Host to migrate to.
Since it cannot migrate, there is no way to put the host
On Wed, 20 Feb 2019, 06:43, wrote:
> Current version my oVirt 4.2.6.
> Maybe I need to update it?
>
The 4.2.6 engine is supposed to work with a 4.2.8 node, but yes, it is better to
upgrade.
If you are not using Gluster, I would recommend upgrading to 4.3, which is
the currently supported version.
Current version my oVirt 4.2.6.
Maybe I need to update it?
I downloaded the 4.2.8 node ISO and installed it.
I tried to add the host to oVirt and received this error.
No changes were made after the install.
rpm -qa | grep dmidecode
python-dmidecode-3.12.2-3.el7.x86_64
dmidecode-3.1-2.el7.x86_64
On Tue, 19 Feb 2019 at 09:23, wrote:
> Hi all!
>
> The following error occurs during installation oVirt Node 4.2.8:
>
> EVENT_ID: VDS_INSTALL_IN_PROGRESS_ERROR(511), An error has occurred
> during installation of Host hostname_ovirt_node2: Yum Cannot queue package
> dmidecode:
From: Ryan Barry
Sent: Thursday, 17 January 2019 03:10
To: Brad Riemann
Cc: jeanbapti...@nfrance.com; users
Subject: Re: [ovirt-users] Re: oVirt Node install - kickstart postintall
The quick answer is that operations in %post must be performed before
'nodectl init'. If that's done, they'll happily stick.
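A minimal kickstart sketch of that ordering (the file edit is a placeholder; the point is only that custom %post steps come before `nodectl init` so they persist in the sealed layer):

```
%post --erroronfail
# custom persistent changes first (placeholder example)
echo "site-specific tweak" >> /etc/example.conf
# then initialize the node layout
nodectl init
%end
```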
On Mon, Jan 14, 2019, 9:29 PM Brad Riemann wrote:
Can you send over the whole %post segment via pastebin? I think I've run
across something similar that I addressed, but just want to be sure.
Brad Riemann
Sr. System Architect
Cloud5 Communications
From: jeanbapti...@nfrance.com
Sent: Friday, January 11,