Hi David,
Can you please check the gdeploy version?
This bug was fixed last year:
https://bugzilla.redhat.com/show_bug.cgi?id=1626513
The fix is included in gdeploy-2.0.2-29.
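A quick way to check, assuming an RPM-based install such as oVirt Node:

  # prints the installed gdeploy package version
  rpm -q gdeploy

If it reports anything older than gdeploy-2.0.2-29, updating the package
should make the workaround below unnecessary.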

On Sun, Jan 27, 2019 at 2:38 PM Leo David <leoa...@gmail.com> wrote:

> Hi,
> It seems that I had to add these sections manually to make the script
> work:
> [diskcount]
> 12
> [stripesize]
> 256
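> After saving the edited inventory, the deployment can be re-run with
> gdeploy directly (a rough sketch; the config path is just a placeholder):
>
> # re-run the deployment against the edited configuration
> gdeploy -c /path/to/generated-inventory.conf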
>
> It looks like Ansible still searches for these sections even though I
> have configured "jbod" in the wizard...
>
> Thanks,
>
> Leo
>
>
>
> On Sun, Jan 27, 2019 at 10:49 AM Leo David <leoa...@gmail.com> wrote:
>
>> Hello Everyone,
>> Using version 4.2.8 ( ovirt-node-ng-installer-4.2.0-2019012606.el7.iso ),
>> deploying a single-node instance from within the Cockpit UI does not seem
>> to be possible.
>> Here's the generated inventory ( I've specified "jbod" in the wizard ):
>>
>> #gdeploy configuration generated by cockpit-gluster plugin
>> [hosts]
>> 192.168.80.191
>>
>> [script1:192.168.80.191]
>> action=execute
>> ignore_script_errors=no
>> file=/usr/share/gdeploy/scripts/grafton-sanity-check.sh -d sdb -h 192.168.80.191
>> [disktype]
>> jbod
>> [service1]
>> action=enable
>> service=chronyd
>> [service2]
>> action=restart
>> service=chronyd
>> [shell2]
>> action=execute
>> command=vdsm-tool configure --force
>> [script3]
>> action=execute
>> file=/usr/share/gdeploy/scripts/blacklist_all_disks.sh
>> ignore_script_errors=no
>> [pv1:192.168.80.191]
>> action=create
>> devices=sdb
>> ignore_pv_errors=no
>> [vg1:192.168.80.191]
>> action=create
>> vgname=gluster_vg_sdb
>> pvname=sdb
>> ignore_vg_errors=no
>> [lv1:192.168.80.191]
>> action=create
>> lvname=gluster_lv_engine
>> ignore_lv_errors=no
>> vgname=gluster_vg_sdb
>> mount=/gluster_bricks/engine
>> size=230GB
>> lvtype=thick
>> [selinux]
>> yes
>> [service3]
>> action=restart
>> service=glusterd
>> slice_setup=yes
>> [firewalld]
>> action=add
>> ports=111/tcp,2049/tcp,54321/tcp,5900/tcp,5900-6923/tcp,5666/tcp,16514/tcp,54322/tcp
>> services=glusterfs
>> [script2]
>> action=execute
>> file=/usr/share/gdeploy/scripts/disable-gluster-hooks.sh
>> [shell3]
>> action=execute
>> command=usermod -a -G gluster qemu
>> [volume1]
>> action=create
>> volname=engine
>> transport=tcp
>> key=storage.owner-uid,storage.owner-gid,features.shard,performance.low-prio-threads,performance.strict-o-direct,network.remote-dio,network.ping-timeout,user.cifs,nfs.disable,performance.quick-read,performance.read-ahead,performance.io-cache,cluster.eager-lock
>> value=36,36,on,32,on,off,30,off,on,off,off,off,enable
>> brick_dirs=192.168.80.191:/gluster_bricks/engine/engine
>> ignore_volume_errors=no
>>
>> The deployment does not finish, throwing the following error:
>>
>> PLAY [gluster_servers] *********************************************************
>> TASK [Create volume group on the disks] ****************************************
>> changed: [192.168.80.191] => (item={u'brick': u'/dev/sdb', u'vg': u'gluster_vg_sdb'})
>> PLAY RECAP *********************************************************************
>> 192.168.80.191             : ok=1    changed=1    unreachable=0    failed=0
>> *Error: Section diskcount not found in the configuration file*
>>
>> Any thoughts?
>>
>> --
>> Best regards, Leo David
>>
>
>
> --
> Best regards, Leo David


--
Thanks,
Gobinda
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/P4MIBCHPGTYYIJ5NO736VAW37JXXH6MY/
