yum downgrade qemu-kvm-block-gluster-6.0.0-33.el8s 
libvirt-daemon-driver-qemu-6.0.0-33.el8s qemu-kvm-common-6.0.0-33.el8s 
qemu-kvm-hw-usbredir-6.0.0-33.el8s qemu-kvm-ui-opengl-6.0.0-33.el8s 
qemu-kvm-block-rbd-6.0.0-33.el8s qemu-img-6.0.0-33.el8s qemu-kvm-6.0.0-33.el8s 
qemu-kvm-block-curl-6.0.0-33.el8s qemu-kvm-block-ssh-6.0.0-33.el8s 
qemu-kvm-ui-spice-6.0.0-33.el8s ipxe-roms-qemu-6.0.0-33.el8s 
qemu-kvm-core-6.0.0-33.el8s qemu-kvm-docs-6.0.0-33.el8s 
qemu-kvm-block-6.0.0-33.el8s
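Optionally, to keep dnf from pulling the newer AppStream builds back in on the
next update, you could also lock the downgraded versions. This is just a
suggestion on top of the downgrade and assumes the versionlock plugin
(python3-dnf-plugin-versionlock) is installed:

# dnf install python3-dnf-plugin-versionlock
# dnf versionlock add qemu-kvm-6.0.0-33.el8s qemu-kvm-core-6.0.0-33.el8s qemu-img-6.0.0-33.el8s
# dnf versionlock list

Repeat for the remaining packages from the list above, and remove the locks
with 'dnf versionlock delete' once fixed builds land in the repos.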
Best Regards,
Strahil Nikolov
 
On Sun, Jan 23, 2022 at 22:47, Robert Tongue <phuny...@neverserio.us> wrote:

Ahh, I did some repoquery commands and can see that a good bit of the qemu*
packages are coming from appstream rather than
ovirt-4.4-centos-stream-advanced-virtualization.
What's the recommended fix?

From: Strahil Nikolov <hunter86...@yahoo.com>
Sent: Sunday, January 23, 2022 3:41 PM
To: users <users@ovirt.org>; Robert Tongue <phuny...@neverserio.us>
Subject: Re: [ovirt-users] Failed HostedEngine Deployment

I've seen this.

Ensure that all qemu-related packages are coming from 
centos-advanced-virtualization repo (6.0.0-33.el8s.x86_64).
There is a known issue with the latest packages in CentOS Stream.
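A quick way to see which repo each installed package actually came from is
plain dnf (nothing oVirt-specific; the last column shows the originating repo,
prefixed with @):

# dnf list installed 'qemu*' libvirt-daemon-driver-qemu

The qemu packages should report @ovirt-4.4-centos-stream-advanced-virtualization
(or whatever advanced-virtualization repo id is configured on your host)
rather than @appstream.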

Also, you can set the following alias on the hypervisors:
alias virsh='virsh -c 
qemu:///system?authfile=/etc/ovirt-hosted-engine/virsh_auth.conf'
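
Once the alias is in place (add it to root's ~/.bashrc if you want it to
persist across logins), plain virsh commands work without being prompted for
credentials, for example:

# virsh list --all
# virsh dominfo HostedEngine

HostedEngine is the name the hosted-engine VM normally shows up under once it
has been moved to shared storage.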


Best Regards,
Strahil Nikolov
On Sunday, 23 January 2022, 21:14:20 GMT+2, Robert Tongue
<phuny...@neverserio.us> wrote:

Greetings oVirt people,
I am having a problem with the hosted-engine deployment, and unfortunately 
after a weekend spent trying to get this far, I am finally stuck, and cannot 
figure out how to fix this.
I am starting with 1 host, and will have 4 when this is finished.  Storage is
GlusterFS, hyperconverged, but I am managing that myself outside of oVirt.  It's
a single-node GlusterFS volume for now, which I will expand out across the other
nodes as well.  I get all the way through the initial hosted-engine deployment
(via the cockpit interface) pre-storage, then get most of the way through the 
storage portion of it.  It fails at starting the HostedEngine VM in its final 
state after copying the VM disk to shared storage.
This is where it gets weird.
[ ERROR ] fatal: [localhost]: FAILED! => {"changed": false, "msg": "Engine VM 
IP address is while the engine's he_fqdn ovirt.deleted.domain resolves to 
192.168.x.x. If you are using DHCP, check your DHCP reservation configuration"}
I've masked out the domain and IP for obvious reasons.  However, I think this
deployment error isn't really the root cause of the failure; it's just where the
process happens to be when it fails.  The HostedEngine VM is starting, but not
actually booting.  I was able to change the VNC password with `hosted-engine
--add-console-password` and view the local console with that; however, it just
displays "The guest has not initialized the display (yet)".
I also did:
# hosted-engine --console
The engine VM is running on this host
Escape character is ^]
Yet that doesn't go any further, nor does it accept any input.  The VM does not
respond on the network.  I am thinking it's not even reaching the initial BIOS
screen, let alone booting.  What would cause that?
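(A few host-side checks that usually help narrow this kind of thing down; the
HostedEngine domain name and the log paths below are the stock oVirt/EL8
defaults, so adjust if yours differ:)

# hosted-engine --vm-status
# virsh -r list --all
# virsh -r domifaddr HostedEngine --source arp
# getent hosts ovirt.deleted.domain
# tail -n 50 /var/log/libvirt/qemu/HostedEngine.log
# grep -iE 'error|fail' /var/log/vdsm/vdsm.log | tail -n 20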
Here is the glusterfs volume for clarity.
# gluster volume info storage

Volume Name: storage
Type: Distribute
Volume ID: e9544310-8890-43e3-b49c-6e8c7472dbbb
Status: Started
Snapshot Count: 0
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: node1:/var/glusterfs/storage/1
Options Reconfigured:
storage.owner-gid: 36
storage.owner-uid: 36
network.ping-timeout: 5
performance.client-io-threads: on
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1024
cluster.locking-scheme: full
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
performance.strict-o-direct: on
network.remote-dio: disable
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
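(Side note: most of those options match what GlusterFS's bundled "virt" group
profile sets for VM workloads; assuming your gluster packages ship the profile
at /var/lib/glusterd/groups/virt, the same tuning can be applied in one step:)

# gluster volume set storage group virt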
# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 58
model name : Intel(R) Xeon(R) CPU E3-1280 V2 @ 3.60GHz
stepping : 9
microcode : 0x21
cpu MHz : 4000.000
cache size : 8192 KB
physical id : 0
siblings : 8
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer xsave avx f16c rdrand lahf_lm cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms xsaveopt dtherm ida arat pln pts md_clear flush_l1d
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit srbds
bogomips : 7199.86
clflush size : 64
cache_alignment: 64
address sizes : 36 bits physical, 48 bits virtual
power management:
[ plus 7 more ]
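(Assuming the cpuinfo dump is there to show virtualization support: the vmx
flag is present in the flags above, and a quicker host-side sanity check, if
the standard libvirt tools are installed, would be:)

# lscpu | grep -i virtualization
# ls -l /dev/kvm
# virt-host-validate qemu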


Thanks for any insight that can be provided.
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/CNXEMJHWQMH2VE4G6VNTYUDZEYN5NF6F/
