[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-17 Thread Gianluca Cecchi
They should be in the log files I attached to the bugzilla, if you download
the tar.gz file

Gianluca

On Fri, Jul 17, 2020 at 4:45 PM Strahil Nikolov wrote:

> Can you provide the target's facts in the bug report?
>
> Best Regards,
> Strahil Nikolov
>
> On 17 July 2020 at 14:48:39 GMT+03:00, Gianluca Cecchi <
> gianluca.cec...@gmail.com> wrote:
> >On Fri, Jul 17, 2020 at 1:34 PM Dominique Deschênes <
> >dominique.desche...@gcgenicom.com> wrote:
> >
> >> Hi,
> >>
> >> I use ovirt ISO file ovirt-node-ng-installer-4.4.1-2020070811.el8.iso
> >> (July 8).
> >>
> >> I just saw that there is a new version of July 13 (4.4.1-2020071311).
> >I will try it.
> >>
> >>
> >>
> >No. See the thread I referred to; I'm using the July 13 version.
> >Follow the bugzilla I have opened:
> >https://bugzilla.redhat.com/show_bug.cgi?id=1858234
> >
> >Gianluca
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/UXSSLGAAHN3DNACZLI4ELREPCI7NO75O/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-17 Thread Strahil Nikolov via Users
Can you provide the target's facts in the bug report?

Best Regards,
Strahil Nikolov

On 17 July 2020 at 14:48:39 GMT+03:00, Gianluca Cecchi wrote:
>On Fri, Jul 17, 2020 at 1:34 PM Dominique Deschênes <
>dominique.desche...@gcgenicom.com> wrote:
>
>> Hi,
>>
>> I use ovirt ISO file ovirt-node-ng-installer-4.4.1-2020070811.el8.iso
>> (July 8).
>>
>> I just saw that there is a new version of July 13 (4.4.1-2020071311).
>I will try it.
>>
>>
>>
>No. See the thread I referred to; I'm using the July 13 version.
>Follow the bugzilla I have opened:
>https://bugzilla.redhat.com/show_bug.cgi?id=1858234
>
>Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/5I3CZMMWN2HD5SC4PVZQ5TKZQHDH3BPR/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-17 Thread Gianluca Cecchi
On Fri, Jul 17, 2020 at 1:34 PM Dominique Deschênes <
dominique.desche...@gcgenicom.com> wrote:

> Hi,
>
> I use ovirt ISO file ovirt-node-ng-installer-4.4.1-2020070811.el8.iso
> (July 8).
>
> I just saw that there is a new version of July 13 (4.4.1-2020071311). I will 
> try it.
>
>
>
No. See the thread I referred to; I'm using the July 13 version.
Follow the bugzilla I have opened:
https://bugzilla.redhat.com/show_bug.cgi?id=1858234

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RP5KWRXTJ2G6HVVRL2IQ5MXIMJTMXFX5/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-17 Thread Dominique Deschênes

Hi,

I use ovirt ISO file ovirt-node-ng-installer-4.4.1-2020070811.el8.iso (July 8).
I just saw that there is a new version of July 13 (4.4.1-2020071311). I will 
try it.


Dominique Deschênes
Ingénieur chargé de projets, Responsable TI
816, boulevard Guimond, Longueuil J4G 1T5
 450 670-8383 x105  450 670-2259



- Message received -
From: Strahil Nikolov (hunter86...@yahoo.com)
Date: 17/07/20 04:03
To: Dominique Deschênes (dominique.desche...@gcgenicom.com), clam2...@gmail.com, 
users@ovirt.org
Subject: Re: [ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster 
deploy fails insufficient free space no matter how small the volume is set

What version of CentOS 8 are you using -> Stream or regular, and which version?

Best Regards,
Strahil Nikolov

On 16 July 2020 at 21:07:57 GMT+03:00, "Dominique Deschênes" wrote:
>
>
>Hi,
>Thank you for your answers.
>
>I tried to replace "package" with "dnf".  The Gluster installation
>seems to work well, but I got a similar message during the
>deployment of the Hosted Engine.
>
>Here is the error:
>
>[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed":
>false, "msg": "The Python 2 yum module is needed for this module. If
>you require Python 3 support use the `dnf` Ansible module instead."}
>
>
>
>
>
>Dominique Deschênes
>Ingénieur chargé de projets, Responsable TI
>816, boulevard Guimond, Longueuil J4G 1T5
> 450 670-8383 x105  450 670-2259
>
>
>
>- Message received -----
>From: clam2...@gmail.com
>Date: 16/07/20 13:40
>To: users@ovirt.org
>Subject: [ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged
>Gluster deploy fails insufficient free space no matter how small the
>volume is set
>
>Dear Strahil, Dominique and Edward:
>
>I reimaged the three hosts with
>ovirt-node-ng-installer-4.4.1-2020071311.el8.iso just to be sure
>everything was stock (I had upgraded from v4.4) and attempted a
>redeploy with all suggested changes EXCEPT replacing "package" with
>"dnf" --> same failure.  I then made Strahil's recommended replacement
>of "package" with "dnf" and the Gluster deployment succeeded through
>that section of main.yml only to fail a little later at:
>
>- name: Install python-yaml package for Debian systems
>  package:
>    name: python-yaml
>    state: present
>  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"
>
>I found this notable given that I had not replaced "package" with "dnf"
>in the prior section:
>
>- name: Change to Install lvm tools for debian systems.
>  package:
>    name: thin-provisioning-tools
>    state: present
>  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"
>
>and deployment had not failed here.  Anyhow, I deleted the two Debian
>statements as I am deploying from Node (CentOS based), cleaned up,
>cleaned up my drives ('dmsetup remove eui.xxx...' and 'wipefs --all
>--force /dev/nvme0n1 /dev/nvmeXn1 ...')  and redeployed again.  This
>time Gluster deployment seemed to execute main.yml OK only to fail in a
>new file, vdo_create.yml:
>
>TASK [gluster.infra/roles/backend_setup : Install VDO dependencies]
>
>task path:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:26
>fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>
>Expecting that this might continue, I have been looking into the
>documentation of how "package" works, to see if I can find a root cause
>for this rather than reviewing n *.yml files and replacing "package"
>with "dnf" in all of them.  Thank you VERY much to Strahil for helping me!
>
>If Strahil or anyone else has any additional troubleshooting tips,
>suggestions, insight or solutions I am all ears.  I will continue to
>update as I progress.
>
>Respectfully,
>Charles
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an 

[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-17 Thread Gianluca Cecchi
On Fri, Jul 17, 2020 at 10:09 AM Strahil Nikolov via Users wrote:

> What version of CentOS 8 are you using -> Stream or regular, and which version?
>
> Best Regards,
> Strahil Nikolov


Strahil, see the other thread I have just opened.
It happens to me as well with the latest oVirt Node ISO for 4.4.1.1, dated 13/07.
In my opinion there is a major problem with all the yaml files that use the yum
and package modules:
- yum, because it expects python2, which is missing
- package, because it doesn't autodetect dnf; it tries yum and fails for the
same reason as above

A possible workaround that avoids modifying all the yaml files is to install
python2; I don't know if a channel could be enabled to bring python2 back.
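
If the yaml files have to be edited after all, the swap can at least be scripted
instead of done by hand. A hedged sketch only (GNU sed assumed, writes .bak
backups, role path as quoted elsewhere in the thread):

grep -rl 'package:' /etc/ansible/roles/gluster.infra/ \
  | xargs sed -i.bak 's/^\(\s*\)package:\s*$/\1dnf:/'

The capture group keeps each task's original indentation, which matters because
YAML is whitespace-sensitive.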

Gianluca
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OFJ2SFFFQUW5QGQCOXTFR2XJDVR4WOCF/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-17 Thread Strahil Nikolov via Users
What version of CentOS 8 are you using -> Stream or regular, and which version?

Best Regards,
Strahil Nikolov

On 16 July 2020 at 21:07:57 GMT+03:00, "Dominique Deschênes" wrote:
>
>
>Hi,
>Thank you for your answers.
>
>I tried to replace "package" with "dnf".  The Gluster installation
>seems to work well, but I got a similar message during the
>deployment of the Hosted Engine.
>
>Here is the error:
>
>[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed":
>false, "msg": "The Python 2 yum module is needed for this module. If
>you require Python 3 support use the `dnf` Ansible module instead."} 
>
>
>
>
>
>Dominique Deschênes
>Ingénieur chargé de projets, Responsable TI
>816, boulevard Guimond, Longueuil J4G 1T5
> 450 670-8383 x105  450 670-2259
>
>
>
>- Message received -----
>From: clam2...@gmail.com
>Date: 16/07/20 13:40
>To: users@ovirt.org
>Subject: [ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged
>Gluster deploy fails insufficient free space no matter how small the
>volume is set
>
>Dear Strahil, Dominique and Edward:
>
>I reimaged the three hosts with
>ovirt-node-ng-installer-4.4.1-2020071311.el8.iso just to be sure
>everything was stock (I had upgraded from v4.4) and attempted a
>redeploy with all suggested changes EXCEPT replacing "package" with
>"dnf" --> same failure.  I then made Strahil's recommended replacement
>of "package" with "dnf" and the Gluster deployment succeeded through
>that section of main.yml only to fail a little later at:
>
>- name: Install python-yaml package for Debian systems
>  package:
>    name: python-yaml
>    state: present
>  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"
>
>I found this notable given that I had not replaced "package" with "dnf"
>in the prior section:
>
>- name: Change to Install lvm tools for debian systems.
>  package:
>    name: thin-provisioning-tools
>    state: present
>  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"
>
>and deployment had not failed here.  Anyhow, I deleted the two Debian
>statements as I am deploying from Node (CentOS based), cleaned up,
>cleaned up my drives ('dmsetup remove eui.xxx...' and 'wipefs --all
>--force /dev/nvme0n1 /dev/nvmeXn1 ...')  and redeployed again.  This
>time Gluster deployment seemed to execute main.yml OK only to fail in a
>new file, vdo_create.yml:
>
>TASK [gluster.infra/roles/backend_setup : Install VDO dependencies]
>
>task path:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:26
>fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>
>Expecting that this might continue, I have been looking into the
>documentation of how "package" works, to see if I can find a root cause
>for this rather than reviewing n *.yml files and replacing "package"
>with "dnf" in all of them.  Thank you VERY much to Strahil for helping me!
>
>If Strahil or anyone else has any additional troubleshooting tips,
>suggestions, insight or solutions I am all ears.  I will continue to
>update as I progress.
>
>Respectfully,
>Charles
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/3JTZX2OF4JTGRECMZLZXZQT5IWR4PFSG/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/KABMZ6EJIEQJMIYCG6MUPTCKHIZOHI65/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-16 Thread Dominique Deschênes


Hi,
Thank you for your answers.

I tried to replace "package" with "dnf".  The Gluster installation seems to 
work well, but I got a similar message during the deployment of the Hosted 
Engine.

Here is the error:

[ ERROR ] fatal: [localhost]: FAILED! => {"attempts": 10, "changed": false, 
"msg": "The Python 2 yum module is needed for this module. If you require 
Python 3 support use the `dnf` Ansible module instead."} 





Dominique Deschênes
Ingénieur chargé de projets, Responsable TI
816, boulevard Guimond, Longueuil J4G 1T5
 450 670-8383 x105  450 670-2259



- Message received -
From: clam2...@gmail.com
Date: 16/07/20 13:40
To: users@ovirt.org
Subject: [ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster 
deploy fails insufficient free space no matter how small the volume is set

Dear Strahil, Dominique and Edward:

I reimaged the three hosts with 
ovirt-node-ng-installer-4.4.1-2020071311.el8.iso just to be sure everything was 
stock (I had upgraded from v4.4) and attempted a redeploy with all suggested 
changes EXCEPT replacing "package" with "dnf" --> same failure.  I then made 
Strahil's recommended replacement of "package" with "dnf" and the Gluster 
deployment succeeded through that section of main.yml only to fail a little 
later at:

- name: Install python-yaml package for Debian systems
  package:
    name: python-yaml
    state: present
  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"

I found this notable given that I had not replaced "package" with "dnf" in the 
prior section:

- name: Change to Install lvm tools for debian systems.
  package:
    name: thin-provisioning-tools
    state: present
  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"

and deployment had not failed here.  Anyhow, I deleted the two Debian 
statements as I am deploying from Node (CentOS based), cleaned up, cleaned up 
my drives ('dmsetup remove eui.xxx...' and 'wipefs --all --force /dev/nvme0n1 
/dev/nvmeXn1 ...')  and redeployed again.  This time Gluster deployment seemed 
to execute main.yml OK only to fail in a new file, vdo_create.yml:

TASK [gluster.infra/roles/backend_setup : Install VDO dependencies] 
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:26
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}

Expecting that this might continue, I have been looking into the documentation 
of how "package" works, to see if I can find a root cause for this rather than 
reviewing n *.yml files and replacing "package" with "dnf" in all of them.  
Thank you VERY much to Strahil for helping me!

If Strahil or anyone else has any additional troubleshooting tips, suggestions, 
insight or solutions I am all ears.  I will continue to update as I progress.

Respectfully,
Charles
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3JTZX2OF4JTGRECMZLZXZQT5IWR4PFSG/


___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/OGMHY4KGT45UC5FPF7HHH53YJ62IKFC4/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-16 Thread clam2718
Dear Strahil, Dominique and Edward:

I reimaged the three hosts with 
ovirt-node-ng-installer-4.4.1-2020071311.el8.iso just to be sure everything was 
stock (I had upgraded from v4.4) and attempted a redeploy with all suggested 
changes EXCEPT replacing "package" with "dnf" --> same failure.  I then made 
Strahil's recommended replacement of "package" with "dnf" and the Gluster 
deployment succeeded through that section of main.yml only to fail a little 
later at:

- name: Install python-yaml package for Debian systems
  package:
    name: python-yaml
    state: present
  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"

I found this notable given that I had not replaced "package" with "dnf" in the 
prior section:

- name: Change to Install lvm tools for debian systems.
  package:
    name: thin-provisioning-tools
    state: present
  when: ansible_distribution == "Debian" or ansible_distribution == "Ubuntu"

and deployment had not failed here.  Anyhow, I deleted the two Debian 
statements as I am deploying from Node (CentOS based), cleaned up, cleaned up 
my drives ('dmsetup remove eui.xxx...' and 'wipefs --all --force /dev/nvme0n1 
/dev/nvmeXn1 ...')  and redeployed again.  This time Gluster deployment seemed 
to execute main.yml OK only to fail in a new file, vdo_create.yml:

TASK [gluster.infra/roles/backend_setup : Install VDO dependencies] 
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/vdo_create.yml:26
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}

Expecting that this might continue, I have been looking into the documentation 
of how "package" works, to see if I can find a root cause for this rather than 
reviewing n *.yml files and replacing "package" with "dnf" in all of them.  
Thank you VERY much to Strahil for helping me!
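
For what it's worth, the generic "package" module also accepts a documented
"use" option that forces a specific backend, which might be a lighter touch
than renaming the module in every task. A hedged sketch only; the task name is
taken from the failing VDO task above, and the package name 'vdo' is an
assumption, not confirmed in this thread:

- name: Install VDO dependencies
  package:
    name: vdo          # assumed package name, for illustration only
    state: present
    use: dnf           # skip auto-detection and call the dnf backend directly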

If Strahil or anyone else has any additional troubleshooting tips, suggestions, 
insight or solutions I am all ears.  I will continue to update as I progress.

Respectfully,
Charles
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/3JTZX2OF4JTGRECMZLZXZQT5IWR4PFSG/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-16 Thread Strahil Nikolov via Users
Have you tried to replace 'package' with 'dnf' in 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml (somewhere 
around line 33)?

Best Regards,
Strahil Nikolov

On 16 July 2020 at 16:30:04 GMT+03:00, dominique.desche...@gcgenicom.com wrote:
>I also get this message during the Gluster deployment. I tried the
>modifications and it doesn't seem to work. Did you succeed?
>
>Here is the error:
>
>TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools
>for RHEL systems.] ***
>task path:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
>fatal: [ovnode2.telecom.lan]: FAILED! => {"changed": false, "msg": "The
>Python 2 yum module is needed for this module. If you require Python 3
>support use the `dnf` Ansible module instead."}
>fatal: [ovnode1.telecom.lan]: FAILED! => {"changed": false, "msg": "The
>Python 2 yum module is needed for this module. If you require Python 3
>support use the `dnf` Ansible module instead."}
>fatal: [ovnode3.telecom.lan]: FAILED! => {"changed": false, "msg": "The
>Python 2 yum module is needed for this module. If you require Python 3
>support use the `dnf` Ansible module instead."}
>___
>Users mailing list -- users@ovirt.org
>To unsubscribe send an email to users-le...@ovirt.org
>Privacy Statement: https://www.ovirt.org/privacy-policy.html
>oVirt Code of Conduct:
>https://www.ovirt.org/community/about/community-guidelines/
>List Archives:
>https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZVYGT7QUVWGQZERFYQ54I7VYPOM4ALL/
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/K7YEXJ2NYPLED6II2W5ZPCBGAKITDPUI/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-16 Thread Edward Berger
Same issue with the ovirt-node-ng-installer 4.4.1-2020071311.el8 ISO.
[image: gluster-fail.PNG]

On Thu, Jul 16, 2020 at 9:33 AM  wrote:

> I also get this message during the Gluster deployment. I tried the
> modifications and it doesn't seem to work. Did you succeed?
>
> Here is the error:
>
> TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for
> RHEL systems.] ***
> task path:
> /etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
> fatal: [ovnode2.telecom.lan]: FAILED! => {"changed": false, "msg": "The
> Python 2 yum module is needed for this module. If you require Python 3
> support use the `dnf` Ansible module instead."}
> fatal: [ovnode1.telecom.lan]: FAILED! => {"changed": false, "msg": "The
> Python 2 yum module is needed for this module. If you require Python 3
> support use the `dnf` Ansible module instead."}
> fatal: [ovnode3.telecom.lan]: FAILED! => {"changed": false, "msg": "The
> Python 2 yum module is needed for this module. If you require Python 3
> support use the `dnf` Ansible module instead."}
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZVYGT7QUVWGQZERFYQ54I7VYPOM4ALL/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/GV6MIO2V5F75QDFSWEDUBDTY4G4O74FM/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-16 Thread dominique . deschenes
I also get this message during the Gluster deployment. I tried the 
modifications and it doesn't seem to work. Did you succeed?

Here is the error:

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL 
systems.] ***
task path: 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
fatal: [ovnode2.telecom.lan]: FAILED! => {"changed": false, "msg": "The Python 
2 yum module is needed for this module. If you require Python 3 support use the 
`dnf` Ansible module instead."}
fatal: [ovnode1.telecom.lan]: FAILED! => {"changed": false, "msg": "The Python 
2 yum module is needed for this module. If you require Python 3 support use the 
`dnf` Ansible module instead."}
fatal: [ovnode3.telecom.lan]: FAILED! => {"changed": false, "msg": "The Python 
2 yum module is needed for this module. If you require Python 3 support use the 
`dnf` Ansible module instead."}
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/VZVYGT7QUVWGQZERFYQ54I7VYPOM4ALL/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-15 Thread Strahil Nikolov via Users
I guess your only option is to edit 
/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml and 
replace 'package' with 'dnf' (keep the same indentation: two spaces deeper 
than '- name', right where "package" starts).
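
Concretely, the edit would look something like this, using the task text quoted
further down in this message (only the module keyword changes; the indentation
stays exactly the same):

Before:

- name: Change to Install lvm tools for RHEL systems.
  package:
    name: device-mapper-persistent-data
    state: present
  when: ansible_os_family == 'RedHat'

After:

- name: Change to Install lvm tools for RHEL systems.
  dnf:
    name: device-mapper-persistent-data
    state: present
  when: ansible_os_family == 'RedHat'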

Best Regards,
Strahil Nikolov

On 15 July 2020 at 22:39:09 GMT+03:00, clam2...@gmail.com wrote:
>Thank you very much Strahil for your continued assistance.  I have
>tried cleaning up and redeploying four additional times and am still
>experiencing the same error.
>
>To summarize:
>
>(1)
>Attempt 1: change gluster_infra_thick_lvs --> size: 100G to size:
>'100%PVS' and change gluster_infra_thinpools --> lvsize: 500G to
>lvsize: '100%PVS'
>Result 1: deployment failed -->
>TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools
>for RHEL systems.] ***
>task path:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml:33
>fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>
>(2)
>Attempt 2: same as Attempt 1, but substituted 99G for '100%PVS'
>Result 2: same as Result 1
>
>(3)
>Attempt 3: same as Attempt 1, but added
>vars:
>  ansible_python_interpreter: /usr/bin/python3
>Result 3: same as Result 1
>
>(4)
>Attempt 4: reboot all three nodes, same as Attempt 1 but omitted
>previously edited size arguments as I read in documentation at
>https://github.com/gluster/gluster-ansible-infra that the size/lvsize
>arguements for variables gluster_infra_thick_lvs and
>gluster_infra_lv_logicalvols are optional and default to 100% size of
>LV.
>
>At the end of this post are the latest version of the playbook and log
>output.  As best I can tell the nodes are fully updated, default
>installs using verified images of v4.4.1.1.
>
>From /var/log/cockpit/ovirt-dashboard/gluster-deployment.log I see that
>line 33 in task path:
>/etc/ansible/roles/gluster.infra/roles/backend_setup/tasks/main.yml is
>what is causing the deployment to fail at this point
>
>- name: Change to Install lvm tools for RHEL systems.
>  package:
>    name: device-mapper-persistent-data
>    state: present
>  when: ansible_os_family == 'RedHat'
>
>But package device-mapper-persistent-data is installed:
>
>[root@fmov1n1 ~]# dnf install device-mapper-persistent-data
>Last metadata expiration check: 0:32:10 ago on Wed 15 Jul 2020 06:44:19
>PM UTC.
>Package device-mapper-persistent-data-0.8.5-3.el8.x86_64 is already
>installed.
>Dependencies resolved.
>Nothing to do.
>Complete!
>
>[root@fmov1n1 ~]# dnf info device-mapper-persistent-data
>Last metadata expiration check: 0:31:44 ago on Wed 15 Jul 2020 06:44:19
>PM UTC.
>Installed Packages
>Name : device-mapper-persistent-data
>Version  : 0.8.5
>Release  : 3.el8
>Architecture : x86_64
>Size : 1.4 M
>Source   : device-mapper-persistent-data-0.8.5-3.el8.src.rpm
>Repository   : @System
>Summary  : Device-mapper Persistent Data Tools
>URL  : https://github.com/jthornber/thin-provisioning-tools
>License  : GPLv3+
>Description  : thin-provisioning-tools contains check,dump,restore,repair,rmap
>             : and metadata_size tools to manage device-mapper thin provisioning
>             : target metadata devices; cache check,dump,metadata_size,restore
>             : and repair tools to manage device-mapper cache metadata devices
>             : are included and era check, dump, restore and invalidate to manage
>             : snapshot eras
>
>I can't figure out why Ansible v2.9.10 is not calling dnf.  The Ansible dnf
>module is present:
>
>[root@fmov1n1 modules]# ansible-doc -t module dnf
>> DNF   
>(/usr/lib/python3.6/site-packages/ansible/modules/packaging/os/dnf.py)
>
>Installs, upgrade, removes, and lists packages and groups with the
>`dnf' package
>manager.
>
>  * This module is maintained by The Ansible Core Team
>...
>
>
>I am unsure how to further troubleshoot from here!
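
A hedged troubleshooting step, since the "package" dispatch decision is driven
by the ansible_pkg_mgr fact: dump what Ansible actually detects on a node for
the interpreter and the package manager (standard facts, not something from
this thread):

ansible localhost -m setup | grep -E 'pkg_mgr|discovered_interpreter'

On a healthy EL8 host this should report "ansible_pkg_mgr": "dnf" and a python3
discovered interpreter; anything else would explain the fallback to yum.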
>
>Thank you again!!!
>Charles
>
>---
>Latest Gluster Playbook (edited from Wizard output)
>
>hc_nodes:
>  hosts:
>fmov1n1.sn.dtcorp.com:
>  gluster_infra_volume_groups:
>- vgname: gluster_vg_nvme0n1
>  pvname: /dev/mapper/vdo_nvme0n1
>- vgname: gluster_vg_nvme2n1
>  pvname: /dev/mapper/vdo_nvme2n1
>- vgname: gluster_vg_nvme1n1
>  pvname: /dev/mapper/vdo_nvme1n1
>  gluster_infra_mount_devices:
>- path: /gluster_bricks/engine
>  lvname: gluster_lv_engine
>  vgname: gluster_vg_nvme0n1
>- path: /gluster_bricks/data
>  lvname: gluster_lv_data
>

[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-14 Thread Strahil Nikolov via Users
Also, check the LV sizes on the system, as it seems, based on your previous 
outputs, that the PV names do not match.
You might now have a very large HostedEngine LV, which would be a waste of space.

Best Regards,
Strahil Nikolov

On 15 July 2020 at 0:19:09 GMT+03:00, clam2...@gmail.com wrote:
>Thank you Strahil.  I think I edited the oVirt Node Cockpit
>Hyperconverged Wizard Gluster Deployment Ansible playbook as detailed
>in your post and received the following new failure:
>
>TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools
>for RHEL systems.] ***
>fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>
>Any further assistance is most appreciated!!!
>
>Respectfully,
>Charles 
>
>---
>Gluster Deployment Ansible Playbook
>
>hc_nodes:
>  hosts:
>fmov1n1.sn.dtcorp.com:
>  gluster_infra_volume_groups:
>- vgname: gluster_vg_nvme0n1
>  pvname: /dev/mapper/vdo_nvme0n1
>- vgname: gluster_vg_nvme2n1
>  pvname: /dev/mapper/vdo_nvme2n1
>- vgname: gluster_vg_nvme1n1
>  pvname: /dev/mapper/vdo_nvme1n1
>  gluster_infra_mount_devices:
>- path: /gluster_bricks/engine
>  lvname: gluster_lv_engine
>  vgname: gluster_vg_nvme0n1
>- path: /gluster_bricks/data
>  lvname: gluster_lv_data
>  vgname: gluster_vg_nvme2n1
>- path: /gluster_bricks/vmstore
>  lvname: gluster_lv_vmstore
>  vgname: gluster_vg_nvme1n1
>  gluster_infra_vdo:
>- name: vdo_nvme0n1
>  device: /dev/nvme0n1
>  slabsize: 2G
>  logicalsize: 1000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  writepolicy: auto
>  maxDiscardSize: 16M
>- name: vdo_nvme2n1
>  device: /dev/nvme2n1
>  slabsize: 32G
>  logicalsize: 5000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  writepolicy: auto
>  maxDiscardSize: 16M
>- name: vdo_nvme1n1
>  device: /dev/nvme1n1
>  slabsize: 32G
>  logicalsize: 5000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  writepolicy: auto
>  maxDiscardSize: 16M
>  blacklist_mpath_devices:
>- nvme0n1
>- nvme2n1
>- nvme1n1
>  gluster_infra_thick_lvs:
>- vgname: gluster_vg_nvme0n1
>  lvname: gluster_lv_engine
>  size: '100%PVS'
>  gluster_infra_thinpools:
>- vgname: gluster_vg_nvme2n1
>  thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
>  poolmetadatasize: 3G
>- vgname: gluster_vg_nvme1n1
>  thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
>  poolmetadatasize: 3G
>  gluster_infra_lv_logicalvols:
>- vgname: gluster_vg_nvme2n1
>  thinpool: gluster_thinpool_gluster_vg_nvme2n1
>  lvname: gluster_lv_data
>  lvsize: '100%PVS'
>- vgname: gluster_vg_nvme1n1
>  thinpool: gluster_thinpool_gluster_vg_nvme1n1
>  lvname: gluster_lv_vmstore
>  lvsize: '100%PVS'
>fmov1n2.sn.dtcorp.com:
>  gluster_infra_volume_groups:
>- vgname: gluster_vg_nvme0n1
>  pvname: /dev/mapper/vdo_nvme0n1
>- vgname: gluster_vg_nvme2n1
>  pvname: /dev/mapper/vdo_nvme2n1
>- vgname: gluster_vg_nvme1n1
>  pvname: /dev/mapper/vdo_nvme1n1
>  gluster_infra_mount_devices:
>- path: /gluster_bricks/engine
>  lvname: gluster_lv_engine
>  vgname: gluster_vg_nvme0n1
>- path: /gluster_bricks/data
>  lvname: gluster_lv_data
>  vgname: gluster_vg_nvme2n1
>- path: /gluster_bricks/vmstore
>  lvname: gluster_lv_vmstore
>  vgname: gluster_vg_nvme1n1
>  gluster_infra_vdo:
>- name: vdo_nvme0n1
>  device: /dev/nvme0n1
>  slabsize: 2G
>  logicalsize: 1000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  writepolicy: auto
>  maxDiscardSize: 16M
>- name: vdo_nvme2n1
>  device: /dev/nvme2n1
>  slabsize: 32G
>  logicalsize: 5000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  writepolicy: auto
>  maxDiscardSize: 16M
>- name: vdo_nvme1n1
>  device: /dev/nvme1n1
>  slabsize: 32G
>  logicalsize: 5000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  

[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-14 Thread Strahil Nikolov via Users
Based on 
https://github.com/gluster/gluster-ansible-infra/blob/master/roles/backend_setup/tasks/main.yml
the module used is 'package'; the strange thing is why ansible doesn't 
detect python3 and dnf.

As far as I remember, you can edit the play before running it.

Maybe this will fix it:
1. Go to the command line and run:
which python3

2. Set 'ansible_python_interpreter' to the value from the previous step.

Most probably you need to add it as:

vars:
  ansible_python_interpreter: /full/path/to/python3

Note that the variable 'ansible_python_interpreter' must be indented two 
spaces to the right (no tabs allowed).
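
In the wizard-generated playbook quoted elsewhere in this thread, the variable
would presumably sit at the hc_nodes level, next to hosts. A sketch only; the
exact placement is an assumption, not verified here:

hc_nodes:
  vars:
    ansible_python_interpreter: /usr/bin/python3
  hosts:
    fmov1n1.sn.dtcorp.com:
      # ... generated host variables unchanged ...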

Best Regards,
Strahil Nikolov


On 15 July 2020 at 0:19:09 GMT+03:00, clam2...@gmail.com wrote:
>Thank you Strahil.  I think I edited the oVirt Node Cockpit
>Hyperconverged Wizard Gluster Deployment Ansible playbook as detailed
>in your post and received the following new failure:
>
>TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools
>for RHEL systems.] ***
>fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg":
>"The Python 2 yum module is needed for this module. If you require
>Python 3 support use the `dnf` Ansible module instead."}
>
>Any further assistance is most appreciated!!!
>
>Respectfully,
>Charles 
>
>---
>Gluster Deployment Ansible Playbook
>
>hc_nodes:
>  hosts:
>fmov1n1.sn.dtcorp.com:
>  gluster_infra_volume_groups:
>- vgname: gluster_vg_nvme0n1
>  pvname: /dev/mapper/vdo_nvme0n1
>- vgname: gluster_vg_nvme2n1
>  pvname: /dev/mapper/vdo_nvme2n1
>- vgname: gluster_vg_nvme1n1
>  pvname: /dev/mapper/vdo_nvme1n1
>  gluster_infra_mount_devices:
>- path: /gluster_bricks/engine
>  lvname: gluster_lv_engine
>  vgname: gluster_vg_nvme0n1
>- path: /gluster_bricks/data
>  lvname: gluster_lv_data
>  vgname: gluster_vg_nvme2n1
>- path: /gluster_bricks/vmstore
>  lvname: gluster_lv_vmstore
>  vgname: gluster_vg_nvme1n1
>  gluster_infra_vdo:
>- name: vdo_nvme0n1
>  device: /dev/nvme0n1
>  slabsize: 2G
>  logicalsize: 1000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  writepolicy: auto
>  maxDiscardSize: 16M
>- name: vdo_nvme2n1
>  device: /dev/nvme2n1
>  slabsize: 32G
>  logicalsize: 5000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  writepolicy: auto
>  maxDiscardSize: 16M
>- name: vdo_nvme1n1
>  device: /dev/nvme1n1
>  slabsize: 32G
>  logicalsize: 5000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  writepolicy: auto
>  maxDiscardSize: 16M
>  blacklist_mpath_devices:
>- nvme0n1
>- nvme2n1
>- nvme1n1
>  gluster_infra_thick_lvs:
>- vgname: gluster_vg_nvme0n1
>  lvname: gluster_lv_engine
>  size: '100%PVS'
>  gluster_infra_thinpools:
>- vgname: gluster_vg_nvme2n1
>  thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
>  poolmetadatasize: 3G
>- vgname: gluster_vg_nvme1n1
>  thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
>  poolmetadatasize: 3G
>  gluster_infra_lv_logicalvols:
>- vgname: gluster_vg_nvme2n1
>  thinpool: gluster_thinpool_gluster_vg_nvme2n1
>  lvname: gluster_lv_data
>  lvsize: '100%PVS'
>- vgname: gluster_vg_nvme1n1
>  thinpool: gluster_thinpool_gluster_vg_nvme1n1
>  lvname: gluster_lv_vmstore
>  lvsize: '100%PVS'
>fmov1n2.sn.dtcorp.com:
>  gluster_infra_volume_groups:
>- vgname: gluster_vg_nvme0n1
>  pvname: /dev/mapper/vdo_nvme0n1
>- vgname: gluster_vg_nvme2n1
>  pvname: /dev/mapper/vdo_nvme2n1
>- vgname: gluster_vg_nvme1n1
>  pvname: /dev/mapper/vdo_nvme1n1
>  gluster_infra_mount_devices:
>- path: /gluster_bricks/engine
>  lvname: gluster_lv_engine
>  vgname: gluster_vg_nvme0n1
>- path: /gluster_bricks/data
>  lvname: gluster_lv_data
>  vgname: gluster_vg_nvme2n1
>- path: /gluster_bricks/vmstore
>  lvname: gluster_lv_vmstore
>  vgname: gluster_vg_nvme1n1
>  gluster_infra_vdo:
>- name: vdo_nvme0n1
>  device: /dev/nvme0n1
>  slabsize: 2G
>  logicalsize: 1000G
>  blockmapcachesize: 128M
>  emulate512: 'off'
>  writepolicy: auto
>  

[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-14 Thread clam2718
Thank you Strahil.  I think I edited the oVirt Node Cockpit Hyperconverged 
Wizard Gluster Deployment Ansible playbook as detailed in your post and 
received the following new failure:

TASK [gluster.infra/roles/backend_setup : Change to Install lvm tools for RHEL 
systems.] ***
fatal: [fmov1n3.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}
fatal: [fmov1n1.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}
fatal: [fmov1n2.sn.dtcorp.com]: FAILED! => {"changed": false, "msg": "The 
Python 2 yum module is needed for this module. If you require Python 3 support 
use the `dnf` Ansible module instead."}

Any further assistance is most appreciated!!!

Respectfully,
Charles 

---
Gluster Deployment Ansible Playbook

hc_nodes:
  hosts:
fmov1n1.sn.dtcorp.com:
  gluster_infra_volume_groups:
- vgname: gluster_vg_nvme0n1
  pvname: /dev/mapper/vdo_nvme0n1
- vgname: gluster_vg_nvme2n1
  pvname: /dev/mapper/vdo_nvme2n1
- vgname: gluster_vg_nvme1n1
  pvname: /dev/mapper/vdo_nvme1n1
  gluster_infra_mount_devices:
- path: /gluster_bricks/engine
  lvname: gluster_lv_engine
  vgname: gluster_vg_nvme0n1
- path: /gluster_bricks/data
  lvname: gluster_lv_data
  vgname: gluster_vg_nvme2n1
- path: /gluster_bricks/vmstore
  lvname: gluster_lv_vmstore
  vgname: gluster_vg_nvme1n1
  gluster_infra_vdo:
- name: vdo_nvme0n1
  device: /dev/nvme0n1
  slabsize: 2G
  logicalsize: 1000G
  blockmapcachesize: 128M
  emulate512: 'off'
  writepolicy: auto
  maxDiscardSize: 16M
- name: vdo_nvme2n1
  device: /dev/nvme2n1
  slabsize: 32G
  logicalsize: 5000G
  blockmapcachesize: 128M
  emulate512: 'off'
  writepolicy: auto
  maxDiscardSize: 16M
- name: vdo_nvme1n1
  device: /dev/nvme1n1
  slabsize: 32G
  logicalsize: 5000G
  blockmapcachesize: 128M
  emulate512: 'off'
  writepolicy: auto
  maxDiscardSize: 16M
  blacklist_mpath_devices:
- nvme0n1
- nvme2n1
- nvme1n1
  gluster_infra_thick_lvs:
- vgname: gluster_vg_nvme0n1
  lvname: gluster_lv_engine
  size: '100%PVS'
  gluster_infra_thinpools:
- vgname: gluster_vg_nvme2n1
  thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
  poolmetadatasize: 3G
- vgname: gluster_vg_nvme1n1
  thinpoolname: gluster_thinpool_gluster_vg_nvme1n1
  poolmetadatasize: 3G
  gluster_infra_lv_logicalvols:
- vgname: gluster_vg_nvme2n1
  thinpool: gluster_thinpool_gluster_vg_nvme2n1
  lvname: gluster_lv_data
  lvsize: '100%PVS'
- vgname: gluster_vg_nvme1n1
  thinpool: gluster_thinpool_gluster_vg_nvme1n1
  lvname: gluster_lv_vmstore
  lvsize: '100%PVS'
fmov1n2.sn.dtcorp.com:
  gluster_infra_volume_groups:
- vgname: gluster_vg_nvme0n1
  pvname: /dev/mapper/vdo_nvme0n1
- vgname: gluster_vg_nvme2n1
  pvname: /dev/mapper/vdo_nvme2n1
- vgname: gluster_vg_nvme1n1
  pvname: /dev/mapper/vdo_nvme1n1
  gluster_infra_mount_devices:
- path: /gluster_bricks/engine
  lvname: gluster_lv_engine
  vgname: gluster_vg_nvme0n1
- path: /gluster_bricks/data
  lvname: gluster_lv_data
  vgname: gluster_vg_nvme2n1
- path: /gluster_bricks/vmstore
  lvname: gluster_lv_vmstore
  vgname: gluster_vg_nvme1n1
  gluster_infra_vdo:
- name: vdo_nvme0n1
  device: /dev/nvme0n1
  slabsize: 2G
  logicalsize: 1000G
  blockmapcachesize: 128M
  emulate512: 'off'
  writepolicy: auto
  maxDiscardSize: 16M
- name: vdo_nvme2n1
  device: /dev/nvme2n1
  slabsize: 32G
  logicalsize: 5000G
  blockmapcachesize: 128M
  emulate512: 'off'
  writepolicy: auto
  maxDiscardSize: 16M
- name: vdo_nvme1n1
  device: /dev/nvme1n1
  slabsize: 32G
  logicalsize: 5000G
  blockmapcachesize: 128M
  emulate512: 'off'
  writepolicy: auto
  maxDiscardSize: 16M
  blacklist_mpath_devices:
- nvme0n1
- nvme2n1
- nvme1n1
  gluster_infra_thick_lvs:
- vgname: gluster_vg_nvme0n1
  lvname: gluster_lv_engine
  size: '100%PVS'
  gluster_infra_thinpools:
- vgname: gluster_vg_nvme2n1
  thinpoolname: gluster_thinpool_gluster_vg_nvme2n1
  poolmetadatasize: 3G
 

[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-14 Thread Strahil Nikolov via Users


On 14 July 2020 at 16:32:42 GMT+03:00, clam2...@gmail.com wrote:
>Output of pvdisplay for each of the three hosts is below.

>  --- Physical volume ---
>  PV Name   /dev/mapper/vdo_nvme0n1
>  VG Name   gluster_vg_nvme0n1
>  PV Size   100.00 GiB / not usable 4.00 MiB
>  Allocatable   yes
>  PE Size   4.00 MiB
>  Total PE  25599
>  Free PE   25599
>  Allocated PE  0
>  PV UUID   gTHFgm-NU5J-LJWJ-DyIb-ecm7-85Cq-OedKeX

You don't have 100G free due to the 'not usable 4.00 MiB'. Select 99G or 
'100%PVS'.
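
At the LVM level both suggestions amount to allocating by extents rather than
by a fixed byte size. A minimal sketch of the same thing done by hand (VG and
LV names from the thread; untested here):

vgdisplay gluster_vg_nvme0n1 | grep Free
# allocate by percentage of free extents, so the 4 MiB shortfall cannot bite:
lvcreate -l 100%FREE -n gluster_lv_engine gluster_vg_nvme0n1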

Best Regards,
Strahil Nikolov
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/RIIEUFJ2RKNAMCC4DA5ZLUH7MSEHHYIE/


[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-14 Thread clam2718
Output of pvdisplay for each of the three hosts is below.

Node 1:

  --- Physical volume ---
  PV Name   /dev/mapper/vdo_nvme2n1
  VG Name   gluster_vg_nvme2n1
  PV Size   1000.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  255999
  Free PE   255999
  Allocated PE  0
  PV UUID   tKg74P-klP8-o2sX-XCER-wcHf-XW9Q-mFViNT

  --- Physical volume ---
  PV Name   /dev/mapper/vdo_nvme1n1
  VG Name   gluster_vg_nvme1n1
  PV Size   1000.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  255999
  Free PE   255999
  Allocated PE  0
  PV UUID   wXyN5p-LaX3-9b9f-3RbH-j1B6-sXfT-UZ0BG7

  --- Physical volume ---
  PV Name   /dev/mapper/vdo_nvme0n1
  VG Name   gluster_vg_nvme0n1
  PV Size   100.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  25599
  Free PE   25599
  Allocated PE  0
  PV UUID   gTHFgm-NU5J-LJWJ-DyIb-ecm7-85Cq-OedKeX

  --- Physical volume ---
  PV Name   /dev/mapper/luks-3890d311-7c61-43ae-98a5-42c0318e735f
  VG Name   onn
  PV Size   <221.92 GiB / not usable 0
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  56811
  Free PE   10897
  Allocated PE  45914
  PV UUID   FqWsAT-hxAO-UCgq-PA7e-m0W1-3Jrw-XGnLf1

---
Node 2:

  --- Physical volume ---
  PV Name   /dev/mapper/vdo_nvme2n1
  VG Name   gluster_vg_nvme2n1
  PV Size   1000.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  255999
  Free PE   255999
  Allocated PE  0
  PV UUID   KR4c82-465u-B22g-2Q95-4l81-1urD-iqvBRt

  --- Physical volume ---
  PV Name   /dev/mapper/vdo_nvme1n1
  VG Name   gluster_vg_nvme1n1
  PV Size   1000.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  255999
  Free PE   255999
  Allocated PE  0
  PV UUID   sEABVg-tCRU-zW8n-pfPW-p5aj-XbBt-IjsTp1

  --- Physical volume ---
  PV Name   /dev/mapper/vdo_nvme0n1
  VG Name   gluster_vg_nvme0n1
  PV Size   100.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  25599
  Free PE   25599
  Allocated PE  0
  PV UUID   NLRTl5-05ol-6zcH-ZjAS-T82n-hcow-20LYEL

  --- Physical volume ---
  PV Name   /dev/mapper/luks-7d42e806-af06-4a72-96b7-de77f76e562f
  VG Name   onn
  PV Size   <221.92 GiB / not usable 0
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  56811
  Free PE   10897
  Allocated PE  45914
  PV UUID   O07nNl-yd7X-Gh8x-2d4b-lRME-bz21-OjCykI

---
Node 3:

  --- Physical volume ---
  PV Name   /dev/mapper/vdo_nvme2n1
  VG Name   gluster_vg_nvme2n1
  PV Size   1000.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  255999
  Free PE   255999
  Allocated PE  0
  PV UUID   4Yji7W-LuIv-Y2Aq-oD8t-wBwO-VaXY-9coNN0

  --- Physical volume ---
  PV Name   /dev/mapper/vdo_nvme1n1
  VG Name   gluster_vg_nvme1n1
  PV Size   1000.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  255999
  Free PE   255999
  Allocated PE  0
  PV UUID   rTEqJ0-SkWm-Ge05-iz97-ZOoT-AdYY-L6uHtN

  --- Physical volume ---
  PV Name   /dev/mapper/vdo_nvme0n1
  VG Name   gluster_vg_nvme0n1
  PV Size   100.00 GiB / not usable 4.00 MiB
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  25599
  Free PE   25599
  Allocated PE  0
  PV UUID   AoJ9h9-vNYG-IgXQ-gSdB-aYWi-Nzl0-JPiQU3

  --- Physical volume ---
  PV Name   /dev/mapper/luks-5ac3e150-55c1-4fc2-acd4-f2861c3d2e0a
  VG Name   onn
  PV Size   <221.92 GiB / not usable 0
  Allocatable   yes
  PE Size   4.00 MiB
  Total PE  56811
  Free PE   10897
  Allocated PE  45914
  PV UUID   N3HLbG-kUIb-5I98-UfZX-eG9A-qnHi-J4tWWi

---

My apologies for the delay (I am UTC-4).  Thanks so very much for your input, 
Ritesh!

Respectfully,
Charles
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to 

[ovirt-users] Re: oVirt Node 4.4.1.1 Cockpit Hyperconverged Gluster deploy fails insufficient free space no matter how small the volume is set

2020-07-13 Thread Ritesh Chikatwar
On Tue, Jul 14, 2020 at 3:50 AM  wrote:

> Hi,
>
> Deploying oVirt 4.4.1.1 via Cockpit --> Hosted Engine --> Hyperconverged
> fails at Gluster deployment:
>
> TASK [gluster.infra/roles/backend_setup : Create thick logical volume]
> *
> failed: [fmov1n3.sn.dtcorp.com] (item={'vgname': 'gluster_vg_nvme0n1',
> 'lvname': 'gluster_lv_engine', 'size': '100G'}) => {"ansible_index_var":
> "index", "ansible_loop_var": "item", "changed": false, "err": "  Volume
> group \"gluster_vg_nvme0n1\" has insufficient free space (25599 extents):
> 25600 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine",
> "size": "100G", "vgname": "gluster_vg_nvme0n1"}, "msg": "Creating logical
> volume 'gluster_lv_engine' failed", "rc": 5}
> failed: [fmov1n1.sn.dtcorp.com] (item={'vgname': 'gluster_vg_nvme0n1',
> 'lvname': 'gluster_lv_engine', 'size': '100G'}) => {"ansible_index_var":
> "index", "ansible_loop_var": "item", "changed": false, "err": "  Volume
> group \"gluster_vg_nvme0n1\" has insufficient free space (25599 extents):
> 25600 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine",
> "size": "100G", "vgname": "gluster_vg_nvme0n1"}, "msg": "Creating logical
> volume 'gluster_lv_engine' failed", "rc": 5}
> failed: [fmov1n2.sn.dtcorp.com] (item={'vgname': 'gluster_vg_nvme0n1',
> 'lvname': 'gluster_lv_engine', 'size': '100G'}) => {"ansible_index_var":
> "index", "ansible_loop_var": "item", "changed": false, "err": "  Volume
> group \"gluster_vg_nvme0n1\" has insufficient free space (25599 extents):
> 25600 required.\n", "index": 0, "item": {"lvname": "gluster_lv_engine",
> "size": "100G", "vgname": "gluster_vg_nvme0n1"}, "msg": "Creating logical
> volume 'gluster_lv_engine' failed", "rc": 5}
>
> Deployment is on 3 Dell PowerEdge R740xd hosts with 5 1.6TB NVMe
> drives per host.  Deployment uses only three of them as JBOD, 1 drive per
> node per volume (engine, data, vmstore), utilizing VDO.
> Thus, deploying even a 100G volume to a 1.6TB drive fails with an
> "insufficient free space" error.
>
> I suspect this might have to do with the Ansible playbook deploying
> Gluster mishandling the logical volume creation due to the rounding error
> as described here:
> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/nofreeext
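
The arithmetic in the error above is consistent with that theory (assuming the
default 4 MiB extent size, which the pvdisplay output elsewhere in the thread
confirms):

  requested: 100 GiB = 102400 MiB / 4 MiB = 25600 extents
  available: 25599 extents (the PV's "not usable 4.00 MiB" costs exactly one)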
>
> If I can provide any additional information, logs, etc. please ask.  Also,
> if anyone has experience/suggestions with Gluster config for hyperconverged
> setup on NVMe drives I would greatly appreciate any pearls of wisdom.
>

Can you provide the output of the pvdisplay command on all hosts?


>
> Thank you so very much for any assistance!
> Charles
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/privacy-policy.html
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/3AARZD4VBNNHWNWRCVD2QNWQZJYY5AL5/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/privacy-policy.html
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/C4CXHDAC2ENZ72VGWMV6VEX6WK43EPT7/