[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-27 Thread Sverker Abrahamsson via Users
We ran into this issue as well when trying to install oVirt Hyperconverged. The root issue is that kmod-kvdo in CentOS 8 (and probably upstream) is built for a specific kernel version, and if you are not running that kernel the module is not found. This is a major issue: even if you match the kernel version, then if
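
A quick way to check whether the module matches the running kernel (a minimal sketch; package names assume CentOS/oVirt Node 8):

    # compare the running kernel with the installed kernel and kmod-kvdo builds
    uname -r
    rpm -qa | grep -E '^kernel-[0-9]|kmod-kvdo'
    # if a matching build is installed, the module should load cleanly
    modprobe kvdo && lsmod | grep kvdo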

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-14 Thread Strahil Nikolov via Users
Can you share both the oVirt and Gluster logs? Best Regards, Strahil Nikolov On Thursday, 14 January 2021, 20:18:03 GMT+2, Charles Lam wrote: Thank you Strahil. I have installed/updated: dnf install --enablerepo="baseos" --enablerepo="appstream" --enablerepo="extras" --enabler

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-14 Thread Charles Lam
Dear Friends, Resolved! Gluster just deployed for me successfully. It turns out it was two typos in my /etc/hosts file. Why or how ping still resolved the names properly and worked, I am not sure. Special thanks to Ritesh and most especially Strahil Nikolov for their assistance in resolving other issues along
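
For anyone hitting the same wall, a minimal sketch of the kind of /etc/hosts entries involved (hostnames and addresses here are hypothetical, not taken from the thread):

    # storage network, direct-connect: one entry per node, present on every host
    10.10.10.1   host1-storage.fqdn.tld   host1-storage
    10.10.10.2   host2-storage.fqdn.tld   host2-storage
    10.10.10.3   host3-storage.fqdn.tld   host3-storage
    # a single wrong octet or misspelled name here can break the Gluster
    # deploy even when ping by name appears to work from the shell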

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-14 Thread Charles Lam
Thank you Strahil. I have installed/updated: dnf install --enablerepo="baseos" --enablerepo="appstream" --enablerepo="extras" --enablerepo="ha" --enablerepo="plus" centos-release-gluster8.noarch centos-release-storage-common.noarch dnf upgrade --enablerepo="baseos" --enablerepo="appstream" --
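
Laid out as commands (the tail of the upgrade line is cut off in the archive; the repo list for dnf upgrade is assumed to mirror the install line):

    dnf install --enablerepo="baseos" --enablerepo="appstream" \
        --enablerepo="extras" --enablerepo="ha" --enablerepo="plus" \
        centos-release-gluster8.noarch centos-release-storage-common.noarch
    # presumably followed by an upgrade with the same repositories enabled
    dnf upgrade --enablerepo="baseos" --enablerepo="appstream" \
        --enablerepo="extras" --enablerepo="ha" --enablerepo="plus"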

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-13 Thread Strahil Nikolov via Users
As those are brand new, try to install the Gluster v8 repo and update the nodes to 8.3, then rerun the deployment: yum install centos-release-gluster8.noarch followed by yum update. Best Regards, Strahil Nikolov At 23:37 + on 13.01.2021 (Wed), Charles Lam wrote: > Dear Friends: > > I am still stuck

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-13 Thread Charles Lam
Dear Friends: I am still stuck at task path: /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67 "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals", "Volume heal failed." I refined /e
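
When the deployment stops at this task, the usual manual check before rerunning is to confirm brick and heal state (a sketch; <VOLNAME> stands for whichever HCI volume the play was creating, e.g. engine, data or vmstore):

    gluster volume status <VOLNAME>      # every brick should show Online "Y"
    gluster volume heal <VOLNAME> info   # pending heal entries should be 0
    gluster volume heal <VOLNAME>        # trigger a heal if entries remain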

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-12 Thread Charles Lam
I will check ‘/var/log/gluster’. I had commented out the filter in ‘/etc/lvm/lvm.conf’ - if I don't, the creation of volume groups fails because the LVM devices are excluded by the filter. Should I not be commenting it out, but modifying it in some way? Thanks! Charles On Tue, Jan 12, 2021 at 12:11 AM St
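
A minimal sketch of what a modified filter might look like instead of commenting it out entirely (device names are hypothetical; the idea is to explicitly accept the brick devices and reject everything else rather than disable filtering altogether):

    # /etc/lvm/lvm.conf, devices { } section
    filter = [ "a|^/dev/nvme0n1|", "a|^/dev/nvme1n1|", "r|.*|" ]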

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-11 Thread Strahil Nikolov via Users
> I tried Gluster deployment after cleaning within the Cockpit web > console, using the suggested ansible-playbook and fresh image with > oVirt Node v4.4 ISO. Ping from each host to the other two works for > both mgmt and storage networks. I am using DHCP for management > network, hosts file for

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-11 Thread Charles Lam
Hi Ritesh, Yes, I have tried Gluster deployment several times. I was able to resolve the "kvdo not installed" issue, but no matter what I have tried recently, I cannot get Gluster to deploy. I had a hyperconverged oVirt cluster/Gluster with VDO successfully running on this hardware and swit

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-11 Thread Ritesh Chikatwar
On Tue, Jan 12, 2021, 2:04 AM Charles Lam wrote: > Dear Strahil and Ritesh, > > Thank you both. I am back where I started with: > > "One or more bricks could be down. Please execute the command again after > bringing all bricks online and finishing any pending heals\nVolume heal > failed.", "std

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-11 Thread Charles Lam
Dear Strahil and Ritesh, Thank you both. I am back where I started with: "One or more bricks could be down. Please execute the command again after bringing all bricks online and finishing any pending heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be down. Please execut

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-08 Thread Ritesh Chikatwar
Hello, can you try cleaning the Gluster deployment once? You can do this by running this command on one of the hosts: ansible-playbook /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_cleanup.yml -i /etc/ansible/hc_wizard_inventory.yml And then rerun the Ansible fl
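
The same cleanup invocation, laid out for readability (paths as given above; the inventory file is presumably the one generated by the Cockpit HCI wizard):

    ansible-playbook \
        /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_cleanup.yml \
        -i /etc/ansible/hc_wizard_inventory.yml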

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-08 Thread Strahil Nikolov via Users
What is the output of 'rpm -qa | grep vdo'? Most probably the Ansible flow is not deploying kvdo, but it's necessary at a later stage. Try to overcome this via 'yum search kvdo' and then 'yum install kmod-kvdo' (replace kmod-kvdo with the package for EL8). Also, I think that you can open a github is
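
The same steps as commands (a sketch; on EL8, dnf and yum are interchangeable and kmod-kvdo is the package name in the CentOS 8 repos):

    rpm -qa | grep vdo     # what, if anything, is already installed
    yum search kvdo        # confirm the package name in the enabled repos
    yum install kmod-kvdo  # the kernel module package on EL8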

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-08 Thread Charles Lam
Dear Strahil, I have rebuilt everything fresh - switches, hosts, cabling. PHY-SEC shows 512 for all NVMe drives being used as bricks. Name resolution via /etc/hosts for the direct-connect storage network works from every host to every other host. I am still blocked by the same "vdo: ERROR - Kernel modul

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Strahil Nikolov via Users
I'm also using direct cabling, so I doubt that is the problem. Starting fresh is wise, but keep in mind:
- wipe your bricks before installing Gluster
- check the PHY-SEC with 'lsblk -t'; if it's not 512, use vdo with the "--emulate512" flag
- ensure that name resolution is working and each node can r
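
For reference, a sketch of those checks and of where the flag applies (the device and VDO volume names are hypothetical, and on oVirt Node the HCI deployment normally creates the VDO volume itself, so the manual vdo create is only an illustration):

    lsblk -t                         # look at the PHY-SEC column per device
    wipefs -a /dev/nvme0n1           # wipe a brick device before reusing it
    vdo create --name=vdo_brick1 --device=/dev/nvme0n1 --emulate512=enabled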

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Charles Lam
Still not able to deploy Gluster on oVirt Node Hyperconverged - same error; upgraded to v4.4.4 and still "kvdo not installed". Tried the suggestion and, per https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/volume_option_table I also tried "gluster volume h
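
The command being referenced is presumably the per-volume option from that table (a sketch; <VOLNAME> is whichever volume the deploy created, e.g. engine):

    gluster volume heal <VOLNAME> granular-entry-heal enable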

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Strahil Nikolov via Users
I see that the self-heal daemon is not running. Just try the following from host1: systemctl stop glusterd; sleep 5; systemctl start glusterd then for i in $(gluster volume list); do gluster volume set $i cluster.granular-entry-heal enable ; done And then rerun the Ansible flow. Best Regards, Strahil
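
The same sequence laid out as shell (exactly the commands from the message, run on host1):

    systemctl stop glusterd; sleep 5; systemctl start glusterd

    for i in $(gluster volume list); do
        gluster volume set "$i" cluster.granular-entry-heal enable
    done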

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Charles Lam
Thanks so very much Strahil for your continued assistance!

[root@fmov1n1 conf.d]# gluster pool list
UUID                                    Hostname          State
16e921fb-99d3-4a2e-81e6-ba095dbc14ca    host2.fqdn.tld    Connected
d4488961-c854-449a-a211-1593810df52f    host3.fqdn.tld    Conn

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-19 Thread Strahil Nikolov via Users
You will need to check the Gluster volume status. Can you provide the output of the following (from 1 node): gluster pool list gluster volume list for i in $(gluster volume list); do gluster volume status $i ; gluster volume info $i; echo "##
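
Laid out as shell (the trailing echo is cut off in the archive; presumably it just prints a separator between volumes):

    gluster pool list
    gluster volume list
    for i in $(gluster volume list); do
        gluster volume status "$i"
        gluster volume info "$i"
        echo "##"   # separator; the rest of this line is truncated above
    done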