I have an engine with a similar issue.
You might want to revert to the old self signed cert created by
installation, and then follow the instructions at
https://ovirt.org/documentation/administration_guide/index.html
to try re-installing the third party cert after you're sure the original
cert i
In case you decide to try again, or for other oVirt users who see similar
issues:
When 'required' networks for the cluster aren't there, or shared storage is
unreachable, the host is put into non-operational state.
oVirt engine wants to handle the network configs for the hypervisors.
The requi
Somewhere along the way, oVirt dropped old CPU support for Windows
compatibility reasons.
Nehalems were dropped, and Westmeres would report as Nehalems and then fail
if AES wasn't turned on in the BIOS.
So this is a case of ERROR_CPU_TOO_OLD. (They're also power hungry
compared to modern CPU
u selected the option: --add-kernel-support on
> the script? I couldn't find the difference between enabling it or not.
>
> Thank you.
>
> On 5 Aug 2021, at 15:20, Vinícius Ferrão
> wrote:
>
> Hmmm. Running the mlnx_ofed_install.sh script is a pain. But I got your
> ide
I don't know if you can just remove the gluster-rdma rpm.
I'm using mlnx ofed on some 4.4 ovirt node hosts by installing it via the
mellanox tar/iso and
running the mellanox install script after adding the required dependencies
with --enable-repo,
which isn't the same as adding a repository and '
oVirt runs VMs as user vdsm@ovirt, so security settings prevent other users,
including root, from seeing them by default.
root should be able to see them read-only with:
virsh -r list
You can also set up an alias for this on the hypervisor:
alias virsh='virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
I would see this if the engine VM storage was not accessible.
In my case a targetd iscsi server for the hosted storage wouldn't serve it
after rebooting from a power outage.
More specifically, the iscsi host had let LVM startup scan the device,
which prevented targetd from serving it.
On Mon, J
I could be mistaken, but I think the issue is that ovirt engine believes
the NFS domain is mandatory for all hypervisor hosts unless you put the
storage domain into 'maintenance' mode.
It will notice it's down and start trying to fence the offending hypervisors
which in turn tries to migrate VMs to
I saw something similar on a test cluster on CentOS 8.3.
You can take it out of global maintenance mode by navigating in the engine UI
to Edit Cluster -> Scheduling Policy and turning off global maintenance there.
Not sure what else is going on. It wants me to put all three hosts into
maintenance mod
For specific users local to the ovirt engine
https://ovirt.org/documentation/administration_guide/index.html#sect-Administering_User_Tasks_From_the_commandline
OK for an emergency admin user or perhaps external system user, but this
doesn't scale very well.
But generally you might want to setup L
With all the other VMs paused, I would guess all the VM disk image storage
is offline or unreachable
from the hypervisor.
Log in to the hypervisor host; df -kh to see what's mounted.
Check the file serving from the hosts there.
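A quick sketch of that check, run on the hypervisor; /rhev/data-center/mnt is where vdsm normally mounts storage domains (adjust if yours differs):

```shell
# List mounts and free space; oVirt storage domains normally show up
# under /rhev/data-center/mnt on the hypervisor.
df -kh | grep -E 'rhev|Filesystem' || df -kh

# mountpoint(1) confirms whether a given path is an active mount.
check_mounted() {
    mountpoint -q "$1" && echo "$1 is mounted" || echo "$1 is NOT mounted"
}
check_mounted /rhev/data-center/mnt   # will say "NOT mounted" on a non-oVirt box
check_mounted /
```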
On Tue, May 18, 2021 at 4:33 PM Eugène Ngontang wrote:
> Hi,
>
> Our se
unix_chkpwd[14187]: check pass; user unknown
May 18 13:03:02 br014 unix_chkpwd[14187]: password check failed for user
(root)
for local user account >1000 UID
May 18 13:03:28 br014 unix_chkpwd[14309]: could not obtain user info
(e##)
On Tue, May 18, 2021 at 12:02 PM Edward Berger wr
/etc/pam.d/cockpit under node 4.4.6 is the same as you posted.
Something else changed.
#%PAM-1.0
# this MUST be first in the "auth" stack as it sets PAM_USER
# user_unknown is definitive, so die instead of ignore to avoid subsequent
modules mess up the error code
-auth [success=done new_autht
I see the same thing on a test cluster. I presumed it was due to using
sssd and kerberos for local user logins.
Here's an example of what gets written to /var/log/secure when root login
to cockpit fails.
May 18 11:29:07 br014 unix_chkpwd[26429]: check pass; user unknown
May 18 11:29:07 br014 unix_
4.3.10 is the latest version of oVirt 4.3, not sure why you'd want to try
installing an older version.
These types of installer errors are usually caused by the device
'sdb' being smaller than what you're trying to allocate,
or
having some previous partition formatting on it (the installer presumes a
non-f
You are looking at very old repos.
One normally does something like..
yum install http://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
yum install python-ovirt-engine-sdk4
On Tue, Apr 20, 2021 at 11:46 AM Miguel Garcia
wrote:
> BTW...
> Already followed instruction as described in
> h
oVirt Node is a dedicated hypervisor distro for oVirt, installed via
imgbased, that handles upgrades all at once with a rollback feature. An
Enterprise Linux host can do the same job, but you partition your
disks however you see fit and manually install RPMs to get the same
functionality.
If you're content with the current RHEL licensing terms, it's a great
choice for now.
No one can tell you the future with certainty.
If you're just creating a virtualization cluster with external storage, why
not use the node-ng installer ISO?
It will be updated as necessary for each new released v
> Question 1: In General, can i install oVirt Node and hosted-engine on
one physical server?
Yes, and single-node hyperconverged is the suggested setup for single-node deployments.
> Question 2: What domain names should I specify in the DNS server and
which ones IP should I reserve?
You can use any domain n
The upgrade selection would only upgrade you to the last version of 4.2 if
that is even out there for it to download.
4.2 has been end of life for a long time now.
You need to upgrade the engine itself to at least 4.3.10 before upgrading to 4.4.
So much has changed internally with 4.2 to 4.3 to 4.4, (4
I have an oVirt 4.3 to 4.4 engine upgrade upcoming, so I decided to start
reading up on the procedure at
https://www.ovirt.org/documentation/upgrade_guide/#SHE_Upgrading_from_4-3
I'm stuck at "4.2 Analyzing the environment" which says to install some RPM
that isn't found
and the closest sounding "
The ovirt-node-ng-installer ISO imgbased installs will have the dependency
issues worked out when they move it to CentOS 8 Stream.
It's the simplest way to go, and it's tested as a whole before you
touch it. It version-locks a lot of things, but some packages
can be added afterwards with someth
I'm seeing a bug with disk type showing up as preallocated in VM portal for
a template disk, when it should be thin-provisioned.
All storage domains are NFS.
I set up a 4.4.5 RC engine with hosts on both 4.4.5 RC and 4.4.4, using the
admin portal as 'admin@internal'.
I created a small CentOS 7.9 VM on NFS d
I recall some earlier ovirt users list postings with similar issues with
the NAS.
One thought they solved it by changing the ovirt storage domain config NFS
version to a lower version instead of whatever it was defaulting to.
The other was blaming the NAS software ZFS underlying the NFS causing som
If the disk was previously used, you may need to 'wipefs -a /dev/sdb' to
clean out any previous partitioning, etc.
If the installer can't create the gluster PV, it is often because the drive
needs to be added to the multipath blacklist:
lsblk to find the ID and add it to the /etc/multipath.conf blacklist.
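The stanza can be sketched like this; the WWID is a made-up placeholder, and the sketch writes to a temp file so it's safe to run — on the real host the stanza goes into /etc/multipath.conf, followed by a systemctl reload multipathd:

```shell
# On the real host: lsblk -o NAME,WWN /dev/sdb   (find the WWID to blacklist)
# The WWID below is a placeholder -- substitute the one lsblk reports.
cat > /tmp/multipath-blacklist-example.conf <<'EOF'
blacklist {
    wwid "3600000000000000000e00000000000001"
}
EOF
grep -c wwid /tmp/multipath-blacklist-example.conf   # sanity check the stanza landed
```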
If the engine VM is in a paused state, ssh into the host where it's paused and
try:
virsh -c qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf
resume HostedEngine
On Tue, Feb 9, 2021 at 2:15 PM Ian Easter wrote:
> Hello Users,
>
> We have an oVirt (4.4) environment that had 2 hosts i
It seems to be failing on adding the ovirtmgmt bridge to the interface
defined on the host as part of the
host addition installation process. I had this issue during a
hosted-engine install when ovirtmgmt was on a tagged
port which was already configured with a chosen name not supported by the
ov
The hosted engine fqdn and IP address should be separate from the
hypervisor host's IP address but on the same network so it can communicate
to the host through a bridged interface that the installer creates on the
hypervisor host for the hosted engine VM.
On Mon, Dec 28, 2020 at 3:14 PM lejeczek
The hyperconverged gluster RPMs aren't installed by default on Enterprise
Linux with the cockpit RPM, but are installed on the oVirt node-ng install.
The quick install neglects this, so look at
https://www.ovirt.org/documentation/installing_ovirt_as_a_self-hosted_engine_using_the_cockpit_web_interfa
roughly matching what will be shipped in upcoming RHEL
On Tue, Oct 20, 2020 at 4:54 AM Sandro Bonazzola
wrote:
>
>
> Il giorno lun 19 ott 2020 alle ore 19:03 Edward Berger <
> edwber...@gmail.com> ha scritto:
>
>> with those packages installed I was able to run the si
system-roles >= 1.0-19 needed by
ovirt-engine-metrics-1.4.2-1.el8.noarch
(try to add '--skip-broken' to skip uninstallable packages or '--nobest' to
use not only best candidate packages)
On Mon, Oct 19, 2020 at 12:25 PM Edward Berger wrote:
> I'm installing 4
I'm installing 4.4.3-pre on CentOS 8.2 and it seems the glusterfs-server
and gluster-ansible-roles RPMs aren't installed
with ovirt-cockpit, which pulls the other dependencies.
This caused the cockpit hyperconverged installer to fail. It mentions the
roles RPM but not the glusterfs
with the role i
for installation time failures of a hosted engine system, look in
/var/log/ovirt-hosted-engine-setup
On Mon, Oct 12, 2020 at 7:50 AM wrote:
> What logs are required?
>
>
>
> Yours Sincerely,
>
>
>
> *Henni *
>
>
>
> *From:* Edward Berger
> *Sent
As an ovirt user my first reaction reading your message was
"that is a ridiculously small system to be trying ovirt self hosted engine
on."
My minimum recommendation is 48GB of RAM on a dual Xeon, since the hosted
ovirt-engine installation by default
wants 16GB/4 vCPUs. I would use a basic KVM/virt-mana
For oVirt nodes upgrading from 4.4.0 to 4.4.2, you must remove the LVM
filter before the upgrade for the node to boot properly afterwards.
It's in the upgrade release notes as a known issue.
https://www.ovirt.org/release/4.4.2/
On Fri, Oct 2, 2020 at 1:39 PM Erez Zarum wrote:
> Hey,
> Bunch of hosts
If it's in an NFS folder, make sure the ownership is vdsm:kvm (36:36).
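A sketch of that fix; ISO_DIR is a temp-dir placeholder here so the commands run anywhere — point it at your actual NFS export, and note the chown needs root:

```shell
# ISO_DIR stands in for your NFS export path (placeholder).
ISO_DIR=$(mktemp -d)
touch "$ISO_DIR/example.iso"

# vdsm runs as UID 36 (vdsm) / GID 36 (kvm); files it can't read won't show up.
chown -R 36:36 "$ISO_DIR" 2>/dev/null || echo "chown to 36:36 needs root"
chmod -R ug+rw "$ISO_DIR"

ls -ln "$ISO_DIR"   # numeric listing; owner/group columns should both read 36
```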
On Sat, Sep 26, 2020 at 2:57 PM matthew.st...@fujitsu.com <
matthew.st...@fujitsu.com> wrote:
> I’ve created and ISO storage domain, and placed ISO’s in the export path
> do not show up under Storage > Storage Domains > iso > i
I've had situations where the engine UI wouldn't update for
shutdowns/startups of VMs, which were resolved after ssh-ing into the engine
VM and running systemctl restart ovirt-engine.service. Running
engine-setup also cleared out old tasks on occasion.
On Fri, Sep 18, 2020 at 1:26 AM G
What is the CPU? I'm asking because you said it was old servers, and at
some point oVirt started filtering out old CPU types which were no longer
supported under Windows. There was also the case where, if a certain BIOS
option wasn't enabled (AES?), a Westmere (supported) reported as an older
model.
For others having issues with VM network routing...
virbr0 is usually installed by default on CentOS, etc., to facilitate
container networking via NAT.
If I'm not planning on running any containers, I usually yum remove the
associated packages,
and a reboot to make sure networking is OK.
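Roughly like this (guarded so it's a no-op on a box without libvirt; whether you remove the RPMs or just disable the default network is a matter of taste):

```shell
# Tear down virbr0 (libvirt's default NAT network) and stop it from
# coming back at boot. Does nothing if libvirt isn't installed.
if command -v virsh >/dev/null 2>&1; then
    virsh net-destroy default 2>/dev/null || true
    virsh net-autostart default --disable 2>/dev/null || true
else
    echo "libvirt not installed; nothing to remove"
fi
```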
Sometimes
It would seem your host can't resolve the IP address for the FQDN
listed for the NFS server.
You should fix the DNS problem, or try mounting via IP address once you're
sure it is reachable from the server.
On Thu, Sep 10, 2020 at 12:33 PM wrote:
> MainProcess|jsonrpc/4::DEBUG::2020-0
he two issues are related, I did this after failing to solve the
> first issue.
>
> On 9/10/20 8:00 AM, Edward Berger wrote:
>
> It sounds like you don't have a proper default route on the VM or the
> netmask is set incorrectly,
> which can cause a bad route.
>
> L
It sounds like you don't have a proper default route on the VM or the
netmask is set incorrectly,
which can cause a bad route.
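A quick way to inspect both, with plain iproute2 (run it inside the engine VM and inside the problem VM and compare):

```shell
# Address and prefix length per interface -- a wrong netmask shows up here.
ip -4 addr show

# There should be exactly one sane "default via <gateway>" line.
ip route show

# Count default routes; 0 (or more than 1) usually explains the symptom.
ip route show | grep -c '^default' || true
```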
Look at differences between the engine's network config (presuming it can
reach outside the hypervisor host)
and the VM's config. The VM should have the same subnet, net
If I'm not mistaken, ManageIQ is the suggested solution to manage multiple
oVirt clusters with their own engines.
On Tue, Aug 4, 2020 at 2:45 PM Holger Petrick
wrote:
> Hello,
>
> I'm looking to deploy oVirt for a company which has locations in different
> countries.
> As I know and also set up i
Yes. You can add compute-only nodes to a hyperconverged cluster to use the
same storage.
On Tue, Aug 4, 2020 at 7:02 AM Benedetto Vassallo <
benedetto.vassa...@unipa.it> wrote:
> Hi all,
> I am planning to build a 3 nodes hyperconverged system with oVirt, but I
> have a question.
> After having
4 to fail.
>
> I will try to think about a possible workaround.
> Can you please create a bug
> <https://bugzilla.redhat.com/enter_bug.cgi?product=vdsm>?
>
> Thank you.
> Best regards,
> Ales
>
> On Mon, Aug 3, 2020 at 10:58 AM Ales Musil wrote:
>
>&g
The 4.4.1 72310 node-ng ISO has a broken installer. It failed to find my
regular ethernet interfaces and gave the "no networks" error.
The 4.4.2-rc hosted-engine installer seems to be working fine, to an
NFS mount for the engine VM storage. I didn't try the gluster wizard.
On Thu, Jul 30, 202
cockpit hosted-engine deploy fails after defining VM name with static
address with similar python2 error.
[image: engine-deploy-fail.JPG]
On Fri, Jul 17, 2020 at 6:44 AM Gobinda Das wrote:
> Hi Gianluca,
> Thanks for opening the bug.
> Adding @Prajith Kesava Prasad to look into it.
>
>
> On Fr
Same issue with ovirt-node-ng-installer 4.4.1-2020071311.el8 iso
[image: gluster-fail.PNG]
On Thu, Jul 16, 2020 at 9:33 AM wrote:
> I also have this message with the deployment of Gluster. I tried the
> modifications and it doesn't seem to work. Did you succeed ?
>
> here error :
>
> TASK [glust
I had an issue like this where I was using a CentOS 7 (targetcli iSCSI)
server which accidentally had LVM enabled upon reboot,
which grabbed the RAID device and stopped targetcli from exporting the RAID
disk as iSCSI.
It only showed up after a power outage and it took me a while to figure out
what ha
While oVirt can do what you would like concerning a single user
interface, with what you listed
you're probably better off with just plain KVM/qemu and virt-manager
for the interface.
Those memory/cpu requirements you listed are really tiny and I wouldn't
recommend even trying
I haven't tried many but for one I just untarred the thing to get the disk
image file and created a new VM with that.
Sometimes the ova file is compressed/assembled in some way that might not
be compatible.
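The untar step looks like the sketch below; it builds a throwaway OVA first so it runs standalone — substitute your real appliance file for the generated one:

```shell
# Build a stand-in OVA: an OVA is normally just a tar archive holding an
# .ovf descriptor plus one or more disk images. Replace with your real file.
work=$(mktemp -d)
touch "$work/appliance.ovf" "$work/disk1.qcow2"
tar -cf "$work/appliance.ova" -C "$work" appliance.ovf disk1.qcow2

tar -tvf "$work/appliance.ova"               # list the contents first
mkdir "$work/extracted"
tar -xf "$work/appliance.ova" -C "$work/extracted" --wildcards 'disk*'
ls "$work/extracted"                         # just the disk image
```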
On Fri, Feb 28, 2020 at 1:12 AM Jayme wrote:
> If the problem is with the upload process
For number 2, I'd look at the actual gluster file directories, in which I'd
expect to see host3 is missing the files.
I'd rsync the files from one of the other hosts to the same location on
host3 and then run "gluster volume heal engine".
Since its the engine volume, I wouldn't be surprised if
Check the actual disk image storage for ownership/permissions issues.
A while back there was a bug that caused VM disk images to be changed from
vdsm:kvm ownership to root.
On Fri, Feb 7, 2020 at 3:26 PM Crazy Ayansh
wrote:
> There are 6 VM on the server and all were working fine before I reb
I thought that was it.
I remembered some experience I had with a test install that recommended
turning the network filter off.
You probably already did this, but when you turn off filtering or make
other changes
to the logical network, like MTU size, you must completely shut down the
attached VMs an
Does your network vnic profile have filtering disabled?
There are options to do that in the drop-down menu "Network Filter".
[image: ovirt-network-filter.png]
On Fri, Nov 1, 2019 at 7:35 AM wrote:
> Hi @hotomoc,
>
> Nothing, it seems there are some filter or like this.
> Maybe some expert could
log into the ovirt node, and "yum update ovirt-node-ng-image-update" and
reboot.
On Wed, Oct 9, 2019 at 4:51 AM Sven Achtelik wrote:
> Hi All,
>
>
>
> is there a way to go from 4.2.8 on ovirt node to go directly to the latest
> version, without reinstalling the node from the iso file ? I wasn’t
Current oVirt node-ng uses 6
]# yum list installed | grep gluster
gluster-ansible-cluster.noarch     1.0.0-1.el7    installed
gluster-ansible-features.noarch    1.0.5-3.el7    installed
gluster-ansible-infra.noarch       1.0.4-3.el7    installed
gluster-ansible-maintenanc
vdsm creates persistent network configs that overwrite manual changes at
reboot
in /var/lib/vdsm/persistence/netconf
You can check your other hosts for any differences there.
It is recommended that networks are set up and managed through ovirt engine.
On Sun, Sep 22, 2019 at 6:01 AM TomK wrote:
You can change that anytime.
On the engine GUI, set the host to maintenance,
then select the "Installation/Reinstall" menu item.
Select the "Hosted Engine" tab, and then pick "DEPLOY".
[image: host-deploy.JPG]
On Wed, Sep 18, 2019 at 6:14 AM wrote:
> I tryed to add a new host to my oVirt environme
I had a similar issue, my LDAP guy said oVirt engine was asking for
uidObject which our ldap didn't provide and
gave me this config addition to make to the
/etc/ovirt-engine/aaa/MY.DOMAIN.properties file so it would
use inetOrgPerson instead
# override default ldap filter. defaults found at
#
ht
On oVirt node-ng, repos are usually disabled, so you need to enable the
repo to install other RPMs.
yum --enablerepo=base install dump xfsdump
yum --enablerepo=base install pam_krb5
On Wed, Aug 14, 2019 at 8:53 PM wrote:
> Hello all,
>
> Hope everyone is doing well! I would simply like to as
If you installed your hypervisor hosts from the Node installer iso, updates
are done by
yum update ovirt-node-ng-image-update, and then reboot the host.
On the hosted-engine VM, you yum update the "release" and *setup*
packages, as you were trying on the node.
On Sun, Aug 11, 2019 at 8:41 PM Cole
Maybe there is something already on the disk from before?
Gluster setup wants it completely blank: no detectable filesystem, no RAID,
etc.
See what is there with fdisk -l; see what PVs exist with pvs.
Manually wipe, reboot, and try again?
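The wipe can be sketched as below; it runs against a scratch image file so it's safe to try — on a real host the target is the actual device (e.g. /dev/sdb) and wiping it destroys data:

```shell
# Scratch image standing in for the disk (placeholder for e.g. /dev/sdb).
DEV=$(mktemp)
truncate -s 64M "$DEV"
mkswap "$DEV" >/dev/null 2>&1      # simulate leftover formatting (a swap signature)

blkid -p "$DEV"                    # low-level probe shows the old signature
wipefs -a "$DEV" >/dev/null        # clear filesystem/RAID/LVM signatures
blkid -p "$DEV" || echo "no signatures left -- looks blank to the installer"
```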
On Fri, Jun 28, 2019 at 5:37 AM wrote:
> I have added the
You might be hitting a multipath config issue.
On a 4.3.4 node-ng cluster I had a similar problem. The spare disk was on
/dev/sda (the boot disk was /dev/sdb).
I found this link
https://stackoverflow.com/questions/45889799/pvcreate-failing-to-create-pv-device-not-found-dev-sdxy-or-ignored-by-filteri
w
I'll presume you didn't fully back up the root file system on the
host which was fried.
It may be easier to replace it with a new hostname/IP.
I would focus on the gluster config first, since it was hyperconverged.
I don't know which way engine UI is using to detect gluster mount on
missing h
When I read your intro, and I hit the memory figure, I was saying to
myself, what
I'd definitely increase the memory if possible. As high as you can
affordably fit into the servers.
Engine asks for 16GB at installation time; add some for gluster services and
you're at your limits before you add
> TASK [openshift_control_plane : Wait for control plane pods to appear]
*
> Monday 27 May 2019 13:31:54 + (0:00:00.180) 0:14:33.857
> FAILED - RETRYING: Wait for control plane pods to appear (60 retries
left).
> FAILED - RETRYING: Wait for control plane pods to appe
step before deployment and adding
> gluster_features_force_varlogsizecheck:
> false under the vars section of the file.
>
> Regards
> Parth Dhanjal
>
> On Fri, May 10, 2019 at 5:58 AM Edward Berger wrote:
>
>> I'm trying to bring up a single node hyperconverged with the current
&
I'm trying to bring up a single node hyperconverged with the current
node-ng ISO installation,
but it ends with this failure message.
TASK [gluster.features/roles/gluster_hci : Check if /var/log has enough
disk space] ***
fatal: [br014.bridges.psc.edu]: FAILED! => {"changed": true, "cmd": "df -m
/
(MainThread) INFO 2019-04-25 16:25:55,755 image_proxy:43:root:(main) Server
shut down, exiting
(MainThread) INFO 2019-04-25 16:36:24,874 server:45:server:(start) Starting
(pid=56989, version=1.5.1)
(MainThread) INFO 2019-04-25 16:36:24,879 image_proxy:34:root:(main) Server
started, successfully noti
Previously I had issues with the upgrades to 4.3.3 failing because of
"stale" image transfer data, so I removed it from the database using the
info given here on the mailing list and was able to complete the oVirt node
and engine upgrades.
Now I have a new problem. I can't upload a disk image any
Thanks! That fixed the problem. engine-setup was able to complete.
On Wed, Apr 17, 2019 at 3:48 AM Yedidyah Bar David wrote:
> On Wed, Apr 17, 2019 at 10:30 AM Lucie Leistnerova
> wrote:
> >
> > Hi Edward,
> >
> > On 4/16/19 9:23 PM, Edward Berger wrote:
>
When trying to put a node 4.3.0 into maintenance, I get the following error:
--
Error while executing action: Cannot switch Host winterfell.psc.edu to
Maintenance mode. Image transfer is in progress for the following (2) disks:
8d130846-bd84-46b0-9a45-b6a2ecf66865,
35fd6f8f-65f5-49e7-ae5a-9b10c5c0
That is completely normal if you didn't download and install the CA
certificate from your ovirt engine GUI.
There's a download link for it on the page before you log in.
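For reference, the CA can also be fetched straight from the engine with curl; engine.example.com is a placeholder for your engine's FQDN, and the pki-resource URL is the standard engine endpoint the login page links to:

```shell
# Placeholder FQDN -- substitute your engine's hostname.
ENGINE=engine.example.com
curl -ks -o /tmp/ovirt-ca.pem \
  "https://${ENGINE}/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA" \
  || echo "engine unreachable from this machine"
# Then import /tmp/ovirt-ca.pem into your browser/OS trust store.
```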
On Mon, Mar 18, 2019 at 5:01 PM wrote:
> Hi,
>
> I tried to create windows2012 vm on nfs data domain, but the disk was
> locked
OK, if the icon is there that is a good thing. There would be no icon if
you didn't select deploy.
It's not terribly obvious when first installing a second host that it needs
the deploy part set.
There's something else causing the engine migration to fail. You can dig
through the logs on the eng
If you installed or reinstalled the second host without
purposely selecting "DEPLOY" under hosted-engine actions,
it will not be able to run the hosted-engine VM.
A quick way to tell if you did is to look at the hosts view and look for
the "crowns" on the left like this attached pic exa
We are attempting to get vGPU-enabled guests working with our oVirt 4.3.0
configuration, but have run into problems.
We are running:
NVidia License Server version 2018.09
NVidia License Client Manager 2018.10.0.25098346
and that license info is correctly retrieved by the clients.
With a Cen
If its a node-ng install, you should just update the whole image with
yum update ovirt-node-ng-image-update
On Wed, Feb 13, 2019 at 8:12 PM Vincent Royer wrote:
> Sorry, this is a node install w/ he.
>
> On Wed, Feb 13, 2019, 4:44 PM Vincent Royer
>> trying to update from 4.2.6 to 4.2.8
>>
>> y
I don't believe the wizard followed your wishes if it comes up with 1005GB
for the thinpool:
500GB data + 500GB vmstore + 5GB metadata = 1005GB.
The wizard tries to do the same setup on all three gluster hosts.
So if you change anything, you have to "check and edit" the config file it
generates in a
ls -l /rhev/data-center/mnt/glusterSD/*/*/images/*
# any under "engine volume" that are owned by root chown and chmod as
vdsm:kvm 660
# then engine should be able to start.
On Tue, Feb 12, 2019 at 4:43 PM Endre Karlson
wrote:
> I Also tried to run
> service vdsmd stop
> vdsm-tool configure --f
A coworker and I are trying to bring up some nvidia vGPU VMs on ovirt 4.3,
but are experiencing issues with the remote consoles for both Windows and
Linux.
A windows10 VM for example works OK with a windows remote desktop client
after enabling the service inside the VM, but using the oVirt VM port
:
> On Thu, Feb 7, 2019 at 8:57 AM Edward Berger wrote:
> >
> > I'm seeing migration failures for the hosted-engine VM from a 4.28 node
> to a 4.30 node so I can complete the node upgrades.
>
> You may be running into
> https://bugzilla.redhat.com/show_bug.c
on that you mentioned).
>
>
>
> On Fri, Feb 8, 2019 at 9:26 PM Edward Berger wrote:
>
>> I'm wondering what cluster cpu compatibility version the Ryzen 2700 would
>> go under?
>> It used to default to Opteron G3 when I tried it before, which is now
>> &qu
I'm wondering what cluster cpu compatibility version the Ryzen 2700 would
go under?
It used to default to Opteron G3 when I tried it before, which is now
"unsupported" as of ovirt 4.3.
CentOS 7 complains about "untested" CPU with Ryzen 2700 in my experience.
Maybe Fedora is better there.
Here are
Ok, I'll check.
4.2.8 nodes:
libvirt.x86_64    4.5.0-10.el7_6.3    installed
4.3.0 upgraded nodes:
libvirt.x86_64    4.5.0-10.el7_6.4    installed
On Thu, Feb 7, 2019 at 12:03 AM Sahina Bose wrote:
> On Thu, Feb 7, 2019 at 8:57 AM Edward Berger wrote:
>
I'm seeing migration failures for the hosted-engine VM from a 4.2.8 node to
a 4.3.0 node, which I need to complete the node upgrades.
In one case I tried to force an update on the last node and now have a
cluster where the hosted-engine VM fails to start properly. Sometimes
something thinks the VM is runn
I upgraded some nodes from 4.2.8 to 4.3 and now when I look at the cockpit
"Services"
tab I see a red failure for Gluster Events Notifier, and clicking through I
get the messages below.
14:00
glustereventsd.service failed.
systemd
14:00
Unit glustereventsd.service entered failed state.
systemd
14
Hi,
One of our projects wants to try offering VMs with nvidia vGPU.
My co-worker had some problems before, so I thought I'd try the latest 4.3
ovirt-node-ng.
In the "Edit Host" -> kernel dialog I see two promising checkbox options
Hostdev Passthrough & SR-IOV (which adds to kernel line intel_iomm
have noticed, who's the best to contact? I
>> haven't been able to get any response from devs on any of (the myriad) of
>> issues with the 4.2.8 image.
>> Also having a ton of strange issues with the hosted-engine vm deployment.
>>
>> On Mon, Feb 4, 201
Yes, I had that issue with an 4.2.8 installation.
I had to manually edit the "web-UI-generated" config to be anywhere close
to what I wanted.
I'll attach an edited config as an example.
On Mon, Feb 4, 2019 at 2:51 PM feral wrote:
> New install of ovirt-node 4.2 (from iso). Setup each node with
> On Tue, Jan 29, 2019 at 10:05 PM Edward Berger
> wrote:
> >
> > Done. It still won't let me remove the host.
> > clicked maintenance, checked ignore gluster... box.
> > clicked remove. got popup " track00.yard.psc.edu:
> >
> > Cannot remov
8, 2019 at 7:31 AM Edward Berger wrote:
> >
> > I have a problem host which also is the one I deployed a hyperconverged
> oVirt node-ng cluster from with the cockpit's hyperconverged installation
> wizard.
> >
> > When I realized after deploying that I hadn't set
-engine, and now it looks it seems to freeze when I try
to reinstall with engine deploy... It eventually times out with a failure.
On Sun, Jan 27, 2019 at 8:59 PM Edward Berger wrote:
> I have a problem host which also is the one I deployed a hyperconverged
> oVirt node-ng cluster from
I'm not sure where to send a request for including the current Aquantia 107
(10GBase-T NIC) driver in the ovirt-node-ng image. I don't
see a CentOS RPM for kmod-redhat-atlantic; apparently there's a Scientific
Linux RPM available for download.
e cpu
type.
On Fri, Aug 31, 2018 at 10:52 AM carl langlois
wrote:
> most of my cpu are
> Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
>
> so yes it is newer..Not sure how to resolve this.
>
> On Tue, Aug 28, 2018 at 2:29 PM Edward Berger wrote:
>
>> Is your CPU really neh
Is your CPU really nehalem or is it something newer?
cat /proc/cpuinfo and google the model if you're not sure.
For my E5650 servers I found I had to go into the BIOS and enable some AES
instruction
before the oVirt node (actually libvirt) detected the CPU type as newer
than Nehalem.
On Tue, Aug
I saw something like this also.
The new ansible based installation tries to verify hostnames using the
getent command
with something like...
getent ahostsv4 hostname.domain | grep hostname.domain
In my case that failed to pick up a CNAME I wanted to use, even though
nslookup hostname.domain
ret