[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-14 Thread Charles Lam
Dear Friends,

Resolved!  Gluster just deployed for me successfully.  It turns out it was two 
typos in my /etc/hosts file.  Why or how ping still resolved properly and 
worked, I am not sure.
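
For anyone who hits the same thing, one quick way to catch this kind of /etc/hosts 
mismatch is to compare what every node actually resolves for every peer.  A minimal 
sketch (the hostnames are placeholders for your own management/storage names):

for h in host1 host2 host3; do
  echo "== checked from $h =="
  ssh "$h" 'for peer in host1 host2 host3; do getent ahosts "$peer" | head -n1; done'
done

getent goes through nsswitch.conf, so it reports what Gluster and the deployment 
will actually see, typos included.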

Special thanks to Ritesh and most especially Strahil Nikolov for their 
assistance in resolving other issues along the way.

Gratefully,
Charles


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-14 Thread Charles Lam
Thank you Strahil.  I have installed/updated:

dnf install --enablerepo="baseos" --enablerepo="appstream" \
  --enablerepo="extras" --enablerepo="ha" --enablerepo="plus" \
  centos-release-gluster8.noarch centos-release-storage-common.noarch

dnf upgrade --enablerepo="baseos" --enablerepo="appstream" \
  --enablerepo="extras" --enablerepo="ha" --enablerepo="plus"

Cleaned up and re-ran Ansible.  I am still receiving the same error (below).  As 
always, if you or anyone else has any ideas for troubleshooting, I would be 
grateful.

Gratefully,
Charles

TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **
task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine', 'brick': 
'/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": true, "cmd": ["gluster", "volume", "heal", "engine", 
"granular-entry-heal", "enable"], "delta": "0:00:10.100254", "end": "2021-01-14 
18:07:16.192067", "item": {"arbiter": 0, "brick": 
"/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "non-zero return 
code", "rc": 107, "start": "2021-01-14 18:07:06.091813", "stderr": "", 
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute 
the command again after bringing all bricks online and finishing any pending 
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be 
down. Please execute the command again after bringing all bricks online and 
finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick': 
'/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": true, "cmd": ["gluster", "volume", "heal", "data", 
"granular-entry-heal", "enable"], "delta": "0:00:10.103147", "end": "2021-01-14 
18:07:31.431419", "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", 
"volname": "data"}, "msg": "non-zero return code", "rc": 107, "start": 
"2021-01-14 18:07:21.328272", "stderr": "", "stderr_lines": [], "stdout": "One 
or more bricks could be down. Please execute the command again after bringing 
all bricks online and finishing any pending heals\nVolume heal failed.", 
"stdout_lines": ["One or more bricks could be down. Please execute the command 
again after bringing all bricks online and finishing any pending heals", 
"Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore', 'brick': 
'/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var": 
"item", "changed": true, "cmd": ["gluster", "volume", "heal", "vmstore", 
"granular-entry-heal", "enable"], "delta": "0:00:10.102582", "end": "2021-01-14 
18:07:46.612788", "item": {"arbiter": 0, "brick": 
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": "non-zero 
return code", "rc": 107, "start": "2021-01-14 18:07:36.510206", "stderr": "", 
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute 
the command again after bringing all bricks online and finishing any pending 
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be 
down. Please execute the command again after bringing all bricks online and 
finishing any pending heals", "Volume heal failed."]}


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-13 Thread Charles Lam
Dear Friends:

I am still stuck at

task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
"One or more bricks could be down. Please execute the command again after 
bringing all bricks online and finishing any pending heals", "Volume heal 
failed."

I refined /etc/lvm/lvm.conf to:

filter = ["a|^/dev/disk/by-id/lvm-pv-uuid-F1kxJk-F1wV-QqOR-Tbb1-Pefh-4vod-IVYaz6$|",
          "a|^/dev/nvme.n1|", "a|^/dev/dm-1.|", "r|.*|"]

and have also rebuilt the servers again.  The output of gluster volume status 
shows bricks up but no ports for self-heal daemon:

[root@fmov1n2 ~]# gluster volume status data
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick host1.company.com:/gluster_bricks
/data/data  49153 0  Y   244103
Brick host2.company.com:/gluster_bricks
/data/data  49155 0  Y   226082
Brick host3.company.com:/gluster_bricks
/data/data  49155 0  Y   225948
Self-heal Daemon on localhost   N/A   N/AY   224255
Self-heal Daemon on host2.company.com   N/A   N/AY   233992
Self-heal Daemon on host3.company.com   N/A   N/AY   224245

Task Status of Volume data
--
There are no active volume tasks

The output of "gluster volume heal VOLNAME info" shows connected to the local 
self-heal daemon, but "transport endpoint is not connected" for the two remote 
daemons.  This is the same for all three hosts.
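
For reference, a few checks that show whether the self-heal daemon can actually 
reach the remote bricks (a sketch, using the "data" volume as an example):

gluster volume heal data info summary          # per-brick Connected / Transport endpoint is not connected
ss -tlnp | grep glusterfsd                     # brick processes and the TCP ports they listen on
tail -n 50 /var/log/glusterfs/glustershd.log   # RPC/connection errors from the self-heal daemon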

I have followed the solutions here: https://access.redhat.com/solutions/5089741
and also here: https://access.redhat.com/solutions/3237651

with no success.

I have changed to a different DNS/DHCP server and still have the same issues.  
Could this somehow be related to the direct cabling for my storage/Gluster 
network (no switch)?  /etc/nsswitch.conf is set to "files dns" and pings all 
work, but dig does not resolve the storage names (I understand this is to be 
expected, as they are defined only in /etc/hosts).
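
(Worth noting: dig queries DNS directly, while getent goes through nsswitch.conf, 
so getent is the better test for names that only exist in /etc/hosts.  The storage 
hostname below is a placeholder:)

getent ahosts host1-storage.example.com   # resolved via "files dns", so /etc/hosts entries show up
dig +short host1-storage.example.com      # DNS only - expected to return nothing for hosts-file-only names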

Again, as always, any pointers or wisdom are greatly appreciated.  I am out of 
ideas.

Thank you!
Charles


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-12 Thread Charles Lam
I will check ‘/var/log/glusterfs’.  I had commented out the filter in
‘/etc/lvm/lvm.conf’ - if I don’t, the creation of volume groups fails
because the LVM drives are excluded by the filter.  Should I not be commenting
it out, but modifying it in some way instead?
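
(For what it's worth, an additive filter - rather than commenting it out entirely - 
could look roughly like the line below; the device patterns are illustrative and 
would need to match the actual bricks.  Whether LVM still sees the brick devices 
can then be confirmed with lvmconfig and pvs:)

filter = [ "a|^/dev/nvme.n1|", "a|^/dev/disk/by-id/lvm-pv-uuid-.*|", "r|.*|" ]

lvmconfig devices/filter
pvs -o pv_name,vg_name,dev_size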

 Thanks!
Charles

On Tue, Jan 12, 2021 at 12:11 AM Strahil Nikolov 
wrote:

>
> I tried Gluster deployment after cleaning within the Cockpit web console,
> using the suggested ansible-playbook and fresh image with oVirt Node v4.4
> ISO.  Ping from each host to the other two works for both mgmt and storage
> networks.  I am using DHCP for management network, hosts file for direct
> connect storage network.
>
> I've tested the command on a test Gluster 8.3 cluster and it passes. Have
> you checked the gluster logs in '/var/log/glusterfs'? I know that LVM
> filtering is enabled on oVirt 4.4, so can you take a look in the lvm conf:
> grep -Ev "^$|#" /etc/lvm/lvm.conf | grep filter
> VDO is using lvm.conf too, so it could cause strange issues. What happens
> when the deployment fails and you rerun (ansible should be idempotent)?
>
>
> Best Regards,
> Strahil Nikolov
>


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-11 Thread Charles Lam
Hi Ritesh,

Yes, I have tried Gluster deployment several times.  I was able to resolve
the "kvdo not installed" issue, but no matter what I have tried recently I
cannot get Gluster to deploy.  I previously had a hyperconverged oVirt
cluster with Gluster and VDO running successfully on this hardware and these
switches.  What has changed since then is that I switched the storage to
direct connect and am now installing with oVirt v4.4.  I was last successful
with oVirt v4.2.

I tried Gluster deployment after cleaning within the Cockpit web console,
using the suggested ansible-playbook and fresh image with oVirt Node v4.4
ISO.  Ping from each host to the other two works for both mgmt and storage
networks.  I am using DHCP for management network, hosts file for direct
connect storage network.

Thanks again for your help,
Charles

On Mon, Jan 11, 2021 at 10:03 PM Ritesh Chikatwar 
wrote:

>
>
> On Tue, Jan 12, 2021, 2:04 AM Charles Lam  wrote:
>
>> Dear Strahil and Ritesh,
>>
>> Thank you both.  I am back where I started with:
>>
>> "One or more bricks could be down. Please execute the command again after
>> bringing all bricks online and finishing any pending heals\nVolume heal
>> failed.", "stdout_lines": ["One or more bricks could be down. Please
>> execute the command again after bringing all bricks online and finishing
>> any pending heals", "Volume heal failed."]
>>
>> Regarding my most recent issue:
>>
>> "vdo: ERROR - Kernel module kvdo not installed\nvdo: ERROR - modprobe:
>> FATAL: Module
>> kvdo not found in directory /lib/modules/4.18.0-240.1.1.el8_3.x86_64\n"
>>
>> Per Strahil's note, I checked for kvdo:
>>
>> [r...@host1.tld.com conf.d]# rpm -qa | grep vdo
>> libblockdev-vdo-2.24-1.el8.x86_64
>> vdo-6.2.3.114-14.el8.x86_64
>> kmod-kvdo-6.2.2.117-65.el8.x86_64
>> [r...@host1.tld.com conf.d]#
>>
>> [r...@host2.tld.com conf.d]# rpm -qa | grep vdo
>> libblockdev-vdo-2.24-1.el8.x86_64
>> vdo-6.2.3.114-14.el8.x86_64
>> kmod-kvdo-6.2.2.117-65.el8.x86_64
>> [r...@host2.tld.com conf.d]#
>>
>> [r...@host3.tld.com ~]# rpm -qa | grep vdo
>> libblockdev-vdo-2.24-1.el8.x86_64
>> vdo-6.2.3.114-14.el8.x86_64
>> kmod-kvdo-6.2.2.117-65.el8.x86_64
>> [r...@host3.tld.com ~]#
>>
>> I found
>> https://unix.stackexchange.com/questions/624011/problem-on-centos-8-with-creating-vdo-kernel-module-kvdo-not-installed
>> which pointed to https://bugs.centos.org/view.php?id=17928.  As
>> suggested on the CentOS bug tracker I attempted to manually install
>>
>> vdo-support-6.2.4.14-14.el8.x86_64
>> vdo-6.2.4.14-14.el8.x86_64
>> kmod-kvdo-6.2.3.91-73.el8.x86_64
>>
>> but there was a dependency that kernel-core be greater than what I had
>> installed, so I manually upgraded kernel-core to
>> kernel-core-4.18.0-259.el8.x86_64.rpm then upgraded vdo and kmod-kvdo to
>>
>> vdo-6.2.4.14-14.el8.x86_64.rpm
>> kmod-kvdo-6.2.4.26-76.el8.x86_64.rpm
>>
>> and installed vdo-support-6.2.4.14-14.el8.x86_64.rpm.  Upon clean-up and
>> redeploy I am now back at Gluster deploy failing at
>>
>> TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on]
>> **
>> task path:
>> /etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
>> failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine', 'brick':
>> '/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var":
>> "item", "changed": true, "cmd": ["gluster", "volume", "heal", "engine",
>> "granular-entry-heal", "enable"], "delta": "0:00:10.098573", "end":
>> "2021-01-11 19:27:05.333720", "item": {"arbiter": 0, "brick":
>> "/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "non-zero
>> return code", "rc": 107, "start": "2021-01-11 19:26:55.235147", "stderr":
>> "", "stderr_lines": [], "stdout": "One or more bricks could be down. Please
>> execute the command again after bringing all bricks online and finishing
>> any pending heals\nVolume heal failed.", "stdout_lines": ["One or more
>> bricks could be down. Please execute the command again after bringing all
>> bricks online and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={…

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-11 Thread Charles Lam
Dear Strahil and Ritesh,

Thank you both.  I am back where I started with:

"One or more bricks could be down. Please execute the command again after 
bringing all bricks online and finishing any pending heals\nVolume heal 
failed.", "stdout_lines": ["One or more bricks could be down. Please execute 
the command again after bringing all bricks online and finishing any pending 
heals", "Volume heal failed."]

Regarding my most recent issue:

"vdo: ERROR - Kernel module kvdo not installed\nvdo: ERROR - modprobe: FATAL: 
Module
kvdo not found in directory /lib/modules/4.18.0-240.1.1.el8_3.x86_64\n"

Per Strahil's note, I checked for kvdo:

[r...@host1.tld.com conf.d]# rpm -qa | grep vdo
libblockdev-vdo-2.24-1.el8.x86_64
vdo-6.2.3.114-14.el8.x86_64
kmod-kvdo-6.2.2.117-65.el8.x86_64
[r...@host1.tld.com conf.d]#

[r...@host2.tld.com conf.d]# rpm -qa | grep vdo
libblockdev-vdo-2.24-1.el8.x86_64
vdo-6.2.3.114-14.el8.x86_64
kmod-kvdo-6.2.2.117-65.el8.x86_64
[r...@host2.tld.com conf.d]#

[r...@host3.tld.com ~]# rpm -qa | grep vdo
libblockdev-vdo-2.24-1.el8.x86_64
vdo-6.2.3.114-14.el8.x86_64
kmod-kvdo-6.2.2.117-65.el8.x86_64
[r...@host3.tld.com ~]#
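
Since the error names a specific /lib/modules directory, it is also worth checking 
whether a kvdo module actually exists for the kernel that is running (a quick 
sketch; output will vary):

uname -r                                        # the running kernel
modinfo kvdo | grep -E '^(filename|vermagic)'   # which kernel the installed kvdo module was built for
modprobe -n -v kvdo                             # dry run: shows what would load, or the same FATAL error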

I found 
https://unix.stackexchange.com/questions/624011/problem-on-centos-8-with-creating-vdo-kernel-module-kvdo-not-installed
 which pointed to https://bugs.centos.org/view.php?id=17928.  As suggested on 
the CentOS bug tracker I attempted to manually install 

vdo-support-6.2.4.14-14.el8.x86_64
vdo-6.2.4.14-14.el8.x86_64
kmod-kvdo-6.2.3.91-73.el8.x86_64

but there was a dependency that kernel-core be greater than what I had 
installed, so I manually upgraded kernel-core to 
kernel-core-4.18.0-259.el8.x86_64.rpm, then upgraded vdo and kmod-kvdo to

vdo-6.2.4.14-14.el8.x86_64.rpm
kmod-kvdo-6.2.4.26-76.el8.x86_64.rpm

and installed vdo-support-6.2.4.14-14.el8.x86_64.rpm.  Upon clean-up and 
redeploy I am now back at Gluster deploy failing at 

TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **
task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine', 'brick': 
'/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": true, "cmd": ["gluster", "volume", "heal", "engine", 
"granular-entry-heal", "enable"], "delta": "0:00:10.098573", "end": "2021-01-11 
19:27:05.333720", "item": {"arbiter": 0, "brick": 
"/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "non-zero return 
code", "rc": 107, "start": "2021-01-11 19:26:55.235147", "stderr": "", 
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute 
the command again after bringing all bricks online and finishing any pending 
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be 
down. Please execute the command again after bringing all bricks online and 
finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick': 
'/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": true, "cmd": ["gluster", "volume", "heal", "data", 
"granular-entry-heal", "enable"], "delta": "0:00:10.099670", "end": "2021-01-11 
19:27:20.564554", "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", 
"volname": "data"}, "msg": "non-zero return code", "rc": 107, "start": 
"2021-01-11 19:27:10.464884", "stderr": "", "stderr_lines": [], "stdout": "One 
or more bricks could be down. Please execute the command again after bringing 
all bricks online and finishing any pending heals\nVolume heal failed.", 
"stdout_lines": ["One or more bricks could be down. Please execute the command 
again after bringing all bricks online and finishing any pending heals", 
"Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore', 'brick': 
'/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var": 
"item", "changed": true, "cmd": ["gluster", "volume", "heal", "vmstore", 
"granular-entry-heal", "enable"], "delta": "0:00:10.104624", "end": "2021-01-11 
19:27:35.774230", "item": {"arbiter": 0, "brick": 
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": "non-zero 
return code", "rc": 107, "start": "2021-01-11 19:27:25.669606", "stderr": "", 
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute 
the command again after bringing all bricks online and finishing any pending 
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be 
down. Please execute the command again after bringing all bricks online and 
finishing any pending heals", "Volume heal failed."]}

NO MORE HOSTS LEFT *

NO MORE HOSTS LEFT *

PLAY RECAP *
fmov1n1.sn.dtcorp.com  : ok=70   changed=29   unreachable=0   failed=1

[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2021-01-08 Thread Charles Lam
Dear Strahil,

I have rebuilt everything fresh - switches, hosts, cabling.  PHY-SEC shows 512 
for all nvme drives being used as bricks.  Name resolution via /etc/hosts for 
the direct connect storage network works from all hosts to all hosts.  I am 
still blocked by the same error:

"vdo: ERROR - Kernel module kvdo not installed\nvdo: ERROR - modprobe: FATAL: 
Module
kvdo not found in directory /lib/modules/4.18.0-240.1.1.el8_3.x86_64\n"
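
For completeness, the sector sizes mentioned above can be confirmed with lsblk; 
the device names below match the NVMe bricks listed elsewhere in this thread:

lsblk -o NAME,PHY-SEC,LOG-SEC,SIZE /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1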

Any further suggestions are MOST appreciated.

Thank you and respectfully,
Charles


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Charles Lam
Still not able to deploy Gluster on oVirt Node Hyperconverged - same error; 
upgraded to v4.4.4 and "kvdo not installed"

Tried the suggestion.  Per 
https://access.redhat.com/documentation/en-us/red_hat_gluster_storage/3.5/html/administration_guide/volume_option_table
I also tried "gluster volume heal VOLNAME granular-entry-heal enable" and then 
"gluster volume heal VOLNAME", which returned "transport endpoint is not 
connected".  I double-checked networking and restarted the volume per 
https://access.redhat.com/solutions/5089741 using "gluster volume start VOLNAME 
force".  I also checked the Gluster server and client versions per 
https://access.redhat.com/solutions/5380531 --> the self-heal daemon shows a 
process ID and local status, but not peer status.

Updated to oVirt v4.4.4 and am now receiving:

"vdo: ERROR - Kernel module kvdo not installed\nvdo: ERROR - modprobe: FATAL: 
Module kvdo not found in directory /lib/modules/4.18.0-240.1.1.el8_3.x86_64\n"

This appears to be a recent issue.  I am going to re-image the nodes with oVirt 
Node v4.4.4, rebuild networking, and see if that helps; if not, I will revert 
to v4.4.2 (the most recent version that was successful on this cluster) and see 
if I can build it there.

I am using direct cabling between the three host nodes for the storage network, 
with statically assigned IPs on the local network adapters, along with a hosts 
file and 3x /30 subnets for routing.  The management network is DHCP and set up 
as before, when deployment was successful.  I have confirmed "files" is listed 
first in nsswitch.conf and have not had any issues with ssh to the storage or 
management networks --> could anything related to the direct cabling be the 
reason for the peer connection issue with the self-heal daemon, even though 
"gluster peer status" and "gluster peer probe" are successful?

Thanks again, I will update after rebuilding with oVirt Node v4.4.4

Respectfully,
Charles


[ovirt-users] Re: New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-21 Thread Charles Lam
Thanks so very much Strahil for your continued assistance!

[root@fmov1n1 conf.d]# gluster pool list
UUID                                    Hostname         State
16e921fb-99d3-4a2e-81e6-ba095dbc14ca    host2.fqdn.tld   Connected
d4488961-c854-449a-a211-1593810df52f    host3.fqdn.tld   Connected
f9f9282c-0c1d-405a-a3d3-815e5c6b2606    localhost        Connected
[root@fmov1n1 conf.d]# gluster volume list
data
engine
vmstore
[root@fmov1n1 conf.d]# for i in $(gluster volume list); do gluster volume status $i; gluster volume info $i; echo "###"; done
Status of volume: data
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick host1.fqdn.tld:/gluster_bricks
/data/data  49153 0  Y   899467
Brick host2.fqdn.tld:/gluster_bricks
/data/data  49153 0  Y   820456
Brick host3.fqdn.tld:/gluster_bricks
/data/data  49153 0  Y   820482
Self-heal Daemon on localhost   N/A   N/AY   897788
Self-heal Daemon on host3.fqdn.tld   N/A   N/AY   820406
Self-heal Daemon on host2.fqdn.tld   N/A   N/AY   820367

Task Status of Volume data
--
There are no active volume tasks


Volume Name: data
Type: Replicate
Volume ID: b4e984c8-7c43-4faa-92e1-84351a645408
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1.fqdn.tld:/gluster_bricks/data/data
Brick2: host2.fqdn.tld:/gluster_bricks/data/data
Brick3: host3.fqdn.tld:/gluster_bricks/data/data
Options Reconfigured:
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on
###
Status of volume: engine
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick host1.fqdn.tld:/gluster_bricks
/engine/engine  49152 0  Y   897767
Brick host2.fqdn.tld:/gluster_bricks
/engine/engine  49152 0  Y   820346
Brick host3.fqdn.tld:/gluster_bricks
/engine/engine  49152 0  Y   820385
Self-heal Daemon on localhost   N/A   N/AY   897788
Self-heal Daemon on host3.fqdn.tld   N/A   N/AY   820406
Self-heal Daemon on host2.fqdn.tld   N/A   N/AY   820367

Task Status of Volume engine
--
There are no active volume tasks


Volume Name: engine
Type: Replicate
Volume ID: 75cc04e6-d1cb-4069-aa25-81550b7878db
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: host1.fqdn.tld:/gluster_bricks/engine/engine
Brick2: host2.fqdn.tld:/gluster_bricks/engine/engine
Brick3: host3.fqdn.tld:/gluster_bricks/engine/engine
Options Reconfigured:
performance.strict-o-direct: on
network.ping-timeout: 30
storage.owner-gid: 36
storage.owner-uid: 36
server.event-threads: 4
client.event-threads: 4
cluster.choose-local: off
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 1
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
cluster.quorum-type: auto
cluster.eager-lock: enable
network.remote-dio: off
performance.low-prio-threads: 32
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: on
###
Status of volume: vmstore
Gluster process TCP Port  RDMA Port  Online  Pid
--
Brick host1.fqdn.tld:/gluster_bricks
/vmstore/vmstore49154 0  Y   901139
Brick host2.f

[ovirt-users] New failure Gluster deploy: Set granual-entry-heal on --> Bricks down

2020-12-18 Thread Charles Lam
Dear friends,

Thanks to Donald and Strahil, my earlier Gluster deploy issue was resolved by 
disabling multipath on the nvme drives.  The Gluster deployment is now failing 
on the three-node hyperconverged oVirt v4.4.3 deployment at:
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **
task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67

with:

"stdout": "One or more bricks could be down. Please execute the command
again after bringing all bricks online and finishing any pending heals\nVolume 
heal
failed."

Specifically:

TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **
task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine',
'brick': '/gluster_bricks/engine/engine', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": ["gluster", "volume", "heal",
"engine", "granular-entry-heal", "enable"],
"delta": "0:00:10.112451", "end": "2020-12-18
19:50:22.818741", "item": {"arbiter": 0, "brick":
"/gluster_bricks/engine/engine", "volname": "engine"},
"msg": "non-zero return code", "rc": 107, "start":
"2020-12-18 19:50:12.706290", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down.
Please execute the command again after bringing all bricks online and finishing 
any
pending heals\nVolume heal failed.", "stdout_lines": ["One or more
bricks could be down. Please execute the command again after bringing all 
bricks online
and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick':
'/gluster_bricks/data/data', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": ["gluster", "volume", "heal",
"data", "granular-entry-heal", "enable"], "delta":
"0:00:10.110165", "end": "2020-12-18 19:50:38.260277",
"item": {"arbiter": 0, "brick":
"/gluster_bricks/data/data", "volname": "data"},
"msg": "non-zero return code", "rc": 107, "start":
"2020-12-18 19:50:28.150112", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down.
Please execute the command again after bringing all bricks online and finishing 
any
pending heals\nVolume heal failed.", "stdout_lines": ["One or more
bricks could be down. Please execute the command again after bringing all 
bricks online
and finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore',
'brick': '/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) =>
{"ansible_loop_var": "item", "changed": true,
"cmd": ["gluster", "volume", "heal",
"vmstore", "granular-entry-heal", "enable"],
"delta": "0:00:10.113203", "end": "2020-12-18
19:50:53.767864", "item": {"arbiter": 0, "brick":
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"},
"msg": "non-zero return code", "rc": 107, "start":
"2020-12-18 19:50:43.654661", "stderr": "",
"stderr_lines": [], "stdout": "One or more bricks could be down.
Please execute the command again after bringing all bricks online and finishing 
any
pending heals\nVolume heal failed.", "stdout_lines": ["One or more
bricks could be down. Please execute the command again after bringing all 
bricks online
and finishing any pending heals", "Volume heal failed."]}

Any suggestions regarding troubleshooting, insight or recommendations for 
reading are greatly appreciated.  I apologize for all the email and am only 
creating this as a separate thread as it is a new, presumably unrelated issue.  
I welcome any recommendations if I can improve my forum etiquette.

Respectfully,
Charles


[ovirt-users] Re: [EXT] Re: v4.4.3 Node Cockpit Gluster deploy fails

2020-12-18 Thread Charles Lam
Thank you Donald!  Your and Strahil's suggested solutions regarding disabling 
multipath for the nvme drives were correct.  The Gluster deployment progressed 
much further but stalled at
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] 
**
task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
with
"stdout": "One or more bricks could be down. Please execute the command again 
after bringing all bricks online and finishing any pending heals\nVolume heal 
failed."

Specifically
TASK [gluster.features/roles/gluster_hci : Set granual-entry-heal on] **
task path: 
/etc/ansible/roles/gluster.features/roles/gluster_hci/tasks/hci_volumes.yml:67
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'engine', 'brick': 
'/gluster_bricks/engine/engine', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": true, "cmd": ["gluster", "volume", "heal", "engine", 
"granular-entry-heal", "enable"], "delta": "0:00:10.112451", "end": "2020-12-18 
19:50:22.818741", "item": {"arbiter": 0, "brick": 
"/gluster_bricks/engine/engine", "volname": "engine"}, "msg": "non-zero return 
code", "rc": 107, "start": "2020-12-18 19:50:12.706290", "stderr": "", 
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute 
the command again after bringing all bricks online and finishing any pending 
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be 
down. Please execute the command again after bringing all bricks online and 
finishing any pending heals", "Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'data', 'brick': 
'/gluster_bricks/data/data', 'arbiter': 0}) => {"ansible_loop_var": "item", 
"changed": true, "cmd": ["gluster", "volume", "heal", "data", 
"granular-entry-heal", "enable"], "delta": "0:00:10.110165", "end": "2020-12-18 
19:50:38.260277", "item": {"arbiter": 0, "brick": "/gluster_bricks/data/data", 
"volname": "data"}, "msg": "non-zero return code", "rc": 107, "start": 
"2020-12-18 19:50:28.150112", "stderr": "", "stderr_lines": [], "stdout": "One 
or more bricks could be down. Please execute the command again after bringing 
all bricks online and finishing any pending heals\nVolume heal failed.", 
"stdout_lines": ["One or more bricks could be down. Please execute the command 
again after bringing all bricks online and finishing any pending heals", 
"Volume heal failed."]}
failed: [fmov1n1.sn.dtcorp.com] (item={'volname': 'vmstore', 'brick': 
'/gluster_bricks/vmstore/vmstore', 'arbiter': 0}) => {"ansible_loop_var": 
"item", "changed": true, "cmd": ["gluster", "volume", "heal", "vmstore", 
"granular-entry-heal", "enable"], "delta": "0:00:10.113203", "end": "2020-12-18 
19:50:53.767864", "item": {"arbiter": 0, "brick": 
"/gluster_bricks/vmstore/vmstore", "volname": "vmstore"}, "msg": "non-zero 
return code", "rc": 107, "start": "2020-12-18 19:50:43.654661", "stderr": "", 
"stderr_lines": [], "stdout": "One or more bricks could be down. Please execute 
the command again after bringing all bricks online and finishing any pending 
heals\nVolume heal failed.", "stdout_lines": ["One or more bricks could be 
down. Please execute the command again after bringing all bricks online and 
finishing any pending heals", "Volume heal failed."]}

As this is a different issue, I will post a new thread.

Gratefully yours,
Charles


[ovirt-users] Re: v4.4.3 Node Cockpit Gluster deploy fails

2020-12-18 Thread Charles Lam
I have been asked if multipath has been disabled for the cluster's nvme drives.

I have not enabled or disabled multipath for the nvme drives.  In Gluster 
deploy Step 4 - Bricks I have checked "Multipath Configuration: Blacklist 
Gluster Devices."  I have not performed any custom setup of nvme drives other 
than wiping them in between deployment attempts.  Below is the output of lsscsi 
and multipath -ll on the first host after failed Gluster deployment and before 
cleanup.

Thanks!  Should I set up multipath?  If so, could you point me to documentation 
on setting it up for oVirt?  I still have a lot to learn and appreciate any 
direction.

[root@Host1 conf.d]# lsscsi
[15:0:0:0]   disk    ATA      DELLBOSS VD      00-0   /dev/sda
[17:0:0:0]   process Marvell  Console          1.01   -
[N:0:33:1]   disk    Dell Express Flash PM1725b 1.6TB SFF__1   /dev/nvme0n1
[N:1:33:1]   disk    Dell Express Flash PM1725b 1.6TB SFF__1   /dev/nvme1n1
[N:2:33:1]   disk    Dell Express Flash PM1725b 1.6TB SFF__1   /dev/nvme2n1
[N:3:33:1]   disk    Dell Express Flash PM1725b 1.6TB SFF__1   /dev/nvme3n1
[N:4:33:1]   disk    Dell Express Flash PM1725b 1.6TB SFF__1   /dev/nvme4n1
[root@Host1 conf.d]# multipath -ll
eui.343756304d702022002538580004 dm-0 NVME,Dell Express Flash PM1725b 1.6TB 
SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 0:33:1:1 nvme0n1 259:1 active ready running
eui.343756304d702054002538580004 dm-1 NVME,Dell Express Flash PM1725b 1.6TB 
SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 1:33:1:1 nvme1n1 259:0 active ready running
eui.343756304d700763002538580004 dm-2 NVME,Dell Express Flash PM1725b 1.6TB 
SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 2:33:1:1 nvme2n1 259:3 active ready running
eui.343756304d702047002538580004 dm-4 NVME,Dell Express Flash PM1725b 1.6TB 
SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 4:33:1:1 nvme4n1 259:4 active ready running
eui.343756304d702046002538580004 dm-3 NVME,Dell Express Flash PM1725b 1.6TB 
SFF
size=1.5T features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=1 status=active
  `- 3:33:1:1 nvme3n1 259:2 active ready running
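
For what it's worth, the wizard's "Multipath Configuration: Blacklist Gluster 
Devices" option normally ends up as a multipath drop-in along these lines.  This 
is only an illustrative sketch - the file path is an assumption, and the wwid 
values are the eui.* identifiers shown by multipath -ll above (one line per NVMe 
brick):

# /etc/multipath/conf.d/blacklist.conf
blacklist {
    wwid "eui.343756304d702022002538580004"
    wwid "eui.343756304d702054002538580004"
    # ...one wwid entry per remaining NVMe device
}

# then reload the multipath maps
multipath -r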


[ovirt-users] Re: v4.4.3 Node Cockpit Gluster deploy fails

2020-12-18 Thread Charles Lam
Hi Strahil,

Yes, on each node before deploy I have
- dmsetup remove  for each drive
- wipefs --all --force /dev/nvmeXn1 for each drive
- nvme format -s 1 /dev/nvmeX for each drive (ref: 
https://nvmexpress.org/open-source-nvme-management-utility-nvme-command-line-interface-nvme-cli/)

Then  test using
- pvcreate /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1 
--test
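
(The same wipe steps, looped over the five bricks for convenience - destructive, 
so the device list should be double-checked before running; this is just a sketch 
of what is already described above:)

for i in 0 1 2 3 4; do
  wipefs --all --force "/dev/nvme${i}n1"
  nvme format -s 1 "/dev/nvme${i}"
done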

I have had to comment out the filter in /etc/lvm/lvm.conf, or else all drives 
are excluded by the filter.

Thank you so very much for your response and any additional insight you may 
have!

Respectfully,
Charles


[ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"

2020-02-05 Thread Charles Lam
Hello,

I am having this same issue and have inserted the three new lines into 
/lib/python2.7/site-packages/ansible/modules/cloud/ovirt/ovirt_network.py, yet 
it still occurs.  I have rebooted the oVirt Node host.  Do I need to insert the 
fix elsewhere or take other action to apply it properly?

Thank you very much,

Diamond Tours, Inc.

Charles Lam
13100 Westlinks Terrace, Suite 1, Fort Myers, FL 33913-8625
O: 239. 437.7117 | F: 239.790.1130 | Cell: 239.227.7474
c...@diamondtours.com<mailto:c...@diamondtours.com>



[ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"

2020-02-04 Thread Charles Lam
Thank you very much, Vinícius, for the clarification; I will try it and report back.

Sincerely,
Charles

On Tue, Feb 4, 2020 at 5:32 PM Vinícius Ferrão 
wrote:

> Hi Charles, I have never done it on Gluster. But my hosted engine runs on a
> VLAN too, on a separate network on top of a LACP bond with the ovirtmgmt
> network.
>
> You must create the file. The contents of the file were in my first email;
> adapt it to your needs and then run the vdsm-client command.
>
> Sent from my iPhone
>
> On 4 Feb 2020, at 19:29, Charles Lam  wrote:
>
> 
> Thank you so very much Vinícius, I am trying to deploy the hosted engine
> to a Gluster Storage Domain on a separate network with a VLAN.  I had
> utilized the Cockpit Hyperconverged wizard.
>
> Where do I find storage-network.json, and is it then best to run the
> deployment from the command line or from Cockpit?
>
> Gratefully,
> Charles
>
> On Tue, Feb 4, 2020 at 5:19 PM Vinícius Ferrão 
> wrote:
>
>> Are you trying to deploy the hosted engine to a Storage Domain which is
>> in a separate network with a VLAN?
>>
>> If this is the issue you must inform VDSM of the network so it finds the
>> path. This must be provided during the playbook phase where it asks for the
>> shared storage settings.
>>
>> For example:
>>
>> storage-network.json:
>>
>> {"networks": {"storage": {"bonding": "bond0", "bridged": false, "vlan":
>> 192, "ipaddr": "192.168.10.1", "netmask": "255.255.255.248",
>> "defaultRoute": false}}, "bondings": {}, "options": {"connectivityCheck":
>> false}}
>>
>> vdsm-client -f storage-network.json Host setupNetworks
>>
>>
>>
>> Sent from my iPhone
>>
>> > On 4 Feb 2020, at 19:15, "clam2...@gmail.com" 
>> wrote:
>> >
>> > Hello,
>> >
>> > I am having this same issue and have inserted the three new lines from
>> >
>> > https://github.com/ansible/ansible/issues/66858
>> >
>> > into
>> "/lib/python2.7/site-packages/ansible/modules/cloud/ovirt/ovirt_network.py"
>> yet the issue still occurs when attempting deployment.  I have rebooted
>> the oVirt Node hosts since patching to no avail.  Do I need to insert the
>> fix elsewhere or take other action to properly apply this fix?
>> >
>> > Thank you very much for your assistance.
>> >
>> > Respectfully,
>> > Charles
>>
>


[ovirt-users] Re: Deploy Hosted Engine fails at "Set VLAN ID at datacenter level"

2020-02-04 Thread Charles Lam
[edit: include users@ovirt.org]

Thank you so very much Vinícius, I am trying to deploy the hosted engine to
a Gluster Storage Domain on a separate network with a VLAN.  I had utilized
the Cockpit Hyperconverged wizard.

Where do I find storage-network.json, and is it then best to run the
deployment from the command line or from Cockpit?

Gratefully,
Charles

On Tue, Feb 4, 2020 at 5:19 PM Vinícius Ferrão 
wrote:

> Are you trying to deploy the hosted engine to a Storage Domain which is in
> a separate network with a VLAN?
>
> If this is the issue you must inform VDSM of the network so it finds the
> path. This must be provided during the playbook phase where it asks for the
> shared storage settings.
>
> For example:
>
> storage-network.json:
>
> {"networks": {"storage": {"bonding": "bond0", "bridged": false, "vlan":
> 192, "ipaddr": "192.168.10.1", "netmask": "255.255.255.248",
> "defaultRoute": false}}, "bondings": {}, "options": {"connectivityCheck":
> false}}
>
> vdsm-client -f storage-network.json Host setupNetworks
>
>
>
> Sent from my iPhone
>
> > On 4 Feb 2020, at 19:15, "clam2...@gmail.com" 
> wrote:
> >
> > Hello,
> >
> > I am having this same issue and have inserted the three new lines from
> >
> > https://github.com/ansible/ansible/issues/66858
> >
> > into
> "/lib/python2.7/site-packages/ansible/modules/cloud/ovirt/ovirt_network.py"
> yet the issue still occurs when attempting deployment.  I have rebooted
> the oVirt Node hosts since patching to no avail.  Do I need to insert the
> fix elsewhere or take other action to properly apply this fix?
> >
> > Thank you very much for your assistance.
> >
> > Respectfully,
> > Charles
>


Re: [ovirt-users] dracut-initqueue[488]: Warning: Could not boot.

2018-05-07 Thread Charles Lam
Dear Mr. Zanni:

I have had what I believe to be similar issues.  I am in no way an expert
or even knowledgeable, but from experience I have found this to work:

dd if=/tmp/ovirt-node-ng-installer-ovirt-4.2-2018050417.iso of=/dev/sdb

This command assumes that you are on CentOS or similar; that your
USB stick is at "/dev/sdb"; that you have placed the ISO you want
to image the USB stick with at "/tmp/"; and that the name of your
ISO is "ovirt-node-ng-installer-ovirt-4.2-2018050417.iso" (most likely it
will be something different).
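
A variant of the same command with an explicit block size and progress reporting 
(same path and device assumptions as above) is:

dd if=/tmp/ovirt-node-ng-installer-ovirt-4.2-2018050417.iso of=/dev/sdb bs=4M status=progress oflag=direct
sync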

Again, I am not that knowledgeable, but I have not found the directions on
the oVirt website for imaging Node to a USB stick to work for me, nor have I
had success (with Node only) with the usually great Rufus.

Sincerely,
Charles

On Mon, May 7, 2018 at 4:37 PM Abdelkarim ZANNI  wrote:

> Hello Ovirt users,
>
>
> I'm trying to install oVirt Node on a Dell server using a bootable USB.
>
> After I click install oVirt, the following message is displayed on the
> screen:
>
> dracut-initqueue[488]: Warning: Could not boot.
>
> I tried different versions of oVirt Node without success, can anyone point me
> to how to sort this out?
>
> Thank you in advance.
>
>
> --
> Abdelkarim ZANNI
> GSM: +212671644088
> NUMEA | Rabat - Morocco
>
>