We ran into this issue as well when trying to install oVirt Hyperconverged.
The root issue is that kmod-kvdo in CentOS 8 (and probably upstream) is
built for a specific kernel, and if you don't run that kernel it is not
found. This is a major issue: even if you match the kernel version, then
if
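For anyone hitting this, a quick way to see the mismatch (just a sketch, assuming kmod-kvdo is already installed) is to compare the running kernel with what the module was built for:
uname -r                      # kernel currently running
rpm -q kmod-kvdo              # kmod package that got installed
modinfo kvdo | grep vermagic  # empty or mismatched output suggests the module was not built for the running kernel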
Can you share both the oVirt and Gluster logs?
Best Regards,
Strahil Nikolov
On Thursday, 14 January 2021 at 20:18:03 GMT+2, Charles Lam
wrote:
Thank you Strahil. I have installed/updated:
dnf install --enablerepo="baseos" --enablerepo="appstream"
--enablerepo="extras" --enabler
Dear Friends,
Resolved! Gluster just deployed successfully for me. It turns out there were two
typos in my /etc/hosts file. Why or how ping still resolved properly and worked, I am
not sure.
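For anyone hitting the same thing, a quick way to rule this out (the hostnames below are only placeholders for the real FQDNs) is to check what each name actually resolves to on every node, rather than relying on ping:
getent hosts host1.fqdn.tld host2.fqdn.tld host3.fqdn.tld   # placeholder names; each should return exactly the intended storage-network address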
Special thanks to Ritesh and most especially Strahil Nikolov for their
assistance in resolving other issues along
Thank you Strahil. I have installed/updated:
dnf install --enablerepo="baseos" --enablerepo="appstream"
--enablerepo="extras" --enablerepo="ha" --enablerepo="plus"
centos-release-gluster8.noarch centos-release-storage-common.noarch
dnf upgrade --enablerepo="baseos" --enablerepo="appstream"
--
As those are brand new,
try to install the gluster v8 repo and update the nodes to 8.3 and
then rerun the deployment:
yum install centos-release-gluster8.noarch
yum update
Best Regards,
Strahil Nikolov
At 23:37 + on 13.01.2021 (Wed), Charles Lam wrote:
> Dear Friends:
>
> I am still stuck
Dear Friends:
I am still stuck at
task path:
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
"One or more bricks could be down. Please execute the command again after
bringing all bricks online and finishing any pending heals", "Volume heal
failed."
I refined /e
I will check ‘/var/log/gluster’. I had commented out the filter in
‘/etc/lvm/lvm.conf’ - if I don’t, the creation of volume groups fails
because the LVM devices are excluded by the filter. Should I not be commenting it
out, but modifying it in some way instead?
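For example, I assume the intent is to keep the filter but accept the brick devices explicitly rather than dropping it altogether - something like this in the devices section of ‘/etc/lvm/lvm.conf’ (the device patterns below are only placeholders for my NVMe bricks and VDO volumes):
filter = [ "a|^/dev/nvme|", "a|^/dev/mapper/|", "r|.*|" ]   # placeholder patterns: accept NVMe and device-mapper (VDO) devices, reject the rest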
Thanks!
Charles
On Tue, Jan 12, 2021 at 12:11 AM St
> I tried Gluster deployment after cleaning within the Cockpit web
> console, using the suggested ansible-playbook and fresh image with
> oVirt Node v4.4 ISO. Ping from each host to the other two works for
> both mgmt and storage networks. I am using DHCP for management
> network, hosts file for
Hi Ritesh,
Yes, I have tried Gluster deployment several times. I was able to resolve
the "kvdo not installed" issue, but no matter what I have tried recently,
I cannot get Gluster to deploy. I had a hyperconverged oVirt
cluster/Gluster with VDO successfully running on this hardware and swit
On Tue, Jan 12, 2021, 2:04 AM Charles Lam wrote:
> Dear Strahil and Ritesh,
>
> Thank you both. I am back where I started with:
>
> "One or more bricks could be down. Please execute the command again after
> bringing all bricks online and finishing any pending heals\nVolume heal
> failed.", "std
Dear Strahil and Ritesh,
Thank you both. I am back where I started with:
"One or more bricks could be down. Please execute the command again after
bringing all bricks online and finishing any pending heals\nVolume heal
failed.", "stdout_lines": ["One or more bricks could be down. Please execut
Hello,
Can you try cleaning the Gluster deployment once? You can do this by running
this command on one of the hosts:
ansible-playbook
/etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_cleanup.yml
-i /etc/ansible/hc_wizard_inventory.yml
And then rerun the Ansible fl
What is the output of 'rpm -qa | grep vdo' ?
Most probably the ansible flow is not deploying kvdo, but it's necessary at a
later stage. Try to overcome this via "yum search kvdo" and then "yum install
kmod-kvdo" (replace kmod-kvdo with the package for EL8).
Also, I think that you can open a github is
Dear Strahil,
I have rebuilt everything fresh: switches, hosts, cabling. PHY-SEC shows 512
for all NVMe drives being used as bricks. Name resolution via /etc/hosts for
the direct-connect storage network works from all hosts to all hosts. I am still
blocked by the same
"vdo: ERROR - Kernel modul
I'm also using direct cabling, so I doubt that is the problem.
Starting fresh is wise, but keep in mind:
- wipe your bricks before installing gluster
- check the PHY-SEC with 'lsblk -t'. If it's not 512, use vdo with the
"--emulate512" flag (see the sketch after this list)
- Ensure that name resolution is working and each node can r
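A rough sketch of that check and the workaround (device and VDO names below are only examples; the hyperconverged ansible flow normally creates the VDO volumes itself):
lsblk -t /dev/nvme0n1                                                      # example device; look at the PHY-SEC column
vdo create --name=vdo_brick1 --device=/dev/nvme0n1 --emulate512=enabled    # example names; only needed when PHY-SEC is not 512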
Still not able to deploy Gluster on oVirt Node Hyperconverged - same error;
upgraded to v4.4.4 and "kvdo not installed"
Tried the suggestion, and per
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/volume_option_table
I also tried "gluster volume h
I see that the self-heal daemon is not running.
Just try the following from host1:
systemctl stop glusterd; sleep 5; systemctl start glusterd
for i in $(gluster volume list); do gluster volume set $i
cluster.granular-entry-heal enable ; done
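To confirm the self-heal daemon came back after the restart, something like this should print a 'Self-heal Daemon' line with status 'Y' for every volume:
for i in $(gluster volume list); do gluster volume status $i | grep -i 'self-heal' ; done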
And then rerun the Ansible flow.
Best Regards,
Strahil
Thanks so very much Strahil for your continued assistance!
[root@fmov1n1 conf.d]# gluster pool list
UUID                                  Hostname          State
16e921fb-99d3-4a2e-81e6-ba095dbc14ca  host2.fqdn.tld    Connected
d4488961-c854-449a-a211-1593810df52f  host3.fqdn.tld    Conn
You will need to check the Gluster volume status.
Can you provide the output of the following (from one node):
gluster pool list
gluster volume list
for i in $(gluster volume list); do gluster volume status $i ; gluster volume
info $i; echo
"##