[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-26 Thread Sachidananda URS
On Mon, May 27, 2019 at 9:41 AM  wrote:

> I made them manually. First I created the LVM volumes, then the VDO
> devices, then the Gluster volumes.
>

In that case you must add these mount options manually to fstab:
inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service
gluster-ansible would have added them if you had done an end-to-end deployment
or declared the necessary variables.

-sac
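
For example, using the brick mount points and XFS UUIDs from the original
problem report quoted later in this digest (they match host #2's blkid
output), the resulting entries would look something like the lines below;
the UUIDs must of course match whatever blkid reports on each host:

UUID=5bb67f61-9d14-4d0b-8aa4-ae3905276797 /gluster_bricks/storage_ssd xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=732f939c-f133-4e48-8dc8-c9d21dbc0853 /gluster_bricks/storage_nvme1 xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0
UUID=f55082ca-1269-4477-9bf8-7190f1add9ef /gluster_bricks/storage_nvme2 xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0

After editing, running "mount -a" (or mounting each brick path individually)
is a quick way to verify the entries before the next reboot.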
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/A4X6RNSL6IK3G7YEL23CQDU7KMDVUNRY/


[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-26 Thread michael
I made them manually. First I created the LVM volumes, then the VDO devices,
then the Gluster volumes.
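
For anyone recreating this layout, the manual flow described above (LVM, then
VDO, then the brick filesystem) would look roughly like the sketch below for
one of the NVMe bricks. The VG/LV/VDO names and the PV device are taken from
the blkid output later in this thread, but the exact commands, sizes and
options that were actually used are not in the original message, so treat
this as illustrative only:

pvcreate /dev/mapper/eui.6479a71892882020p1          # one of the LVM2_member devices from blkid
vgcreate vg_gluster_nvme1 /dev/mapper/eui.6479a71892882020p1
lvcreate -l 100%FREE -n lv_gluster_nvme1 vg_gluster_nvme1
# VDO layered on the LV; --vdoLogicalSize is where the over-provisioning mentioned
# later in the thread would be set (the 2T value here is just a placeholder)
vdo create --name=vdo_gluster_nvme1 --device=/dev/vg_gluster_nvme1/lv_gluster_nvme1 --vdoLogicalSize=2T
mkfs.xfs -K /dev/mapper/vdo_gluster_nvme1            # -K skips discarding blocks at mkfs time
# ...then mount it under /gluster_bricks/ and create the Gluster volume on the brick directory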
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/PVU6V6YY34XSX2NC5TKTU5YD4RAU4S7X/


[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-21 Thread Sachidananda URS
On Wed, May 22, 2019 at 11:26 AM Sahina Bose  wrote:

> +Sachidananda URS 
>
> On Wed, May 22, 2019 at 1:14 AM  wrote:
>
>> I'm sorry, I'm still working on my Linux knowledge; here is the output of
>> blkid on one of the servers:
>>
>> /dev/nvme0n1: PTTYPE="dos"
>> /dev/nvme1n1: PTTYPE="dos"
>> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
>> /dev/mapper/eui.0025385881b40f60: PTTYPE="dos"
>> /dev/mapper/eui.6479a71892882020p1:
>> UUID="pfJiP3-HCgP-gCyQ-UIzT-akGk-vRpV-aySGZ2" TYPE="LVM2_member"
>> /dev/mapper/eui.0025385881b40f60p1:
>> UUID="Q0fyzN-9q0s-WDLe-r0IA-MFY0-tose-yzZeu2" TYPE="LVM2_member"
>>
>> /dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H: PTTYPE="dos"
>> /dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H1:
>> UUID="lQrtPt-nx0u-P6Or-f2YW-sN2o-jK9I-gp7P2m" TYPE="LVM2_member"
>> /dev/mapper/vg_gluster_ssd-lv_gluster_ssd:
>> UUID="890feffe-c11b-4c01-b839-a5906ab39ecb" TYPE="vdo"
>> /dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1:
>> UUID="7049fd2a-788d-44cb-9dc5-7b4c0ee309fb" TYPE="vdo"
>> /dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2:
>> UUID="2c541b70-32c5-496e-863f-ea68b50e7671" TYPE="vdo"
>> /dev/mapper/vdo_gluster_ssd: UUID="e59a68d5-2b73-487a-ac5e-409e11402ab5"
>> TYPE="xfs"
>> /dev/mapper/vdo_gluster_nvme1:
>> UUID="d5f53f17-bca1-4cb9-86d5-34a468c062e7" TYPE="xfs"
>> /dev/mapper/vdo_gluster_nvme2:
>> UUID="40a41b5f-be87-4994-b6ea-793cdfc076a4" TYPE="xfs"
>>
>> #2
>> /dev/nvme0n1: PTTYPE="dos"
>> /dev/nvme1n1: PTTYPE="dos"
>> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
>> /dev/mapper/eui.6479a71892882020p1:
>> UUID="GiBSqT-JJ3r-Tn3X-lzCr-zW3D-F3IE-OpE4Ga" TYPE="LVM2_member"
>> /dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001:
>> PTTYPE="dos"
>> /dev/sda: PTTYPE="gpt"
>> /dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001p1:
>> UUID="JBhj79-Uk0E-DdLE-Ibof-VwBq-T5nZ-F8d57O" TYPE="LVM2_member"
>> /dev/sdb: PTTYPE="dos"
>> /dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B: PTTYPE="dos"
>> /dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B1:
>> UUID="6yp5YM-D1be-M27p-AEF5-w1pv-uXNF-2vkiJZ" TYPE="LVM2_member"
>> /dev/mapper/vg_gluster_ssd-lv_gluster_ssd:
>> UUID="9643695c-0ace-4cba-a42c-3f337a7d5133" TYPE="vdo"
>> /dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2:
>> UUID="79f5bacc-cbe7-4b67-be05-414f68818f41" TYPE="vdo"
>> /dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1:
>> UUID="2438a550-5fb4-48f4-a5ef-5cff5e7d5ba8" TYPE="vdo"
>> /dev/mapper/vdo_gluster_ssd: UUID="5bb67f61-9d14-4d0b-8aa4-ae3905276797"
>> TYPE="xfs"
>> /dev/mapper/vdo_gluster_nvme1:
>> UUID="732f939c-f133-4e48-8dc8-c9d21dbc0853" TYPE="xfs"
>> /dev/mapper/vdo_gluster_nvme2:
>> UUID="f55082ca-1269-4477-9bf8-7190f1add9ef" TYPE="xfs"
>>
>> #3
>> /dev/nvme1n1: UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
>> /dev/nvme0n1: PTTYPE="dos"
>> /dev/mapper/nvme.c0a9-313931304531454644323630-4354353030503153534438-0001:
>> UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
>> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
>> /dev/mapper/eui.6479a71892882020p1:
>> UUID="FwBRJJ-ofHI-1kHq-uEf1-H3Fn-SQcw-qWYvmL" TYPE="LVM2_member"
>> /dev/sda: PTTYPE="gpt"
>> /dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A: PTTYPE="gpt"
>> /dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A1:
>> UUID="weCmOq-VZ1a-Itf5-SOIS-AYLp-Ud5N-S1H2bR" TYPE="LVM2_member"
>> PARTUUID="920ef5fd-e525-4cf0-99d5-3951d3013c19"
>> /dev/mapper/vg_gluster_ssd-lv_gluster_ssd:
>> UUID="fbaffbde-74f0-4e4a-9564-64ca84398cde" TYPE="vdo"
>> /dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2:
>> UUID="ae0bd2ad-7da9-485b-824a-72038571c5ba" TYPE="vdo"
>> /dev/mapper/vdo_gluster_ssd: UUID="f0f56784-bc71-46c7-8bfe-6b71327c87c9"
>> TYPE="xfs"
>> /dev/mapper/vdo_gluster_nvme1:
>> UUID="0ddc1180-f228-4209-82f1-1607a46aed1f" TYPE="xfs"
>> /dev/mapper/vdo_gluster_nvme2:
>> UUID="bcb7144a-6ce0-4b3f-9537-f465c46d4843" TYPE="xfs"
>>
>> I don't have any errors on mount until I reboot, and once I reboot it
>> takes ~6hrs for everything to work 100%, since I have to delete the mount
>> entries for the 3 gluster volumes out of fstab and reboot. I'd rather
>> wait until the next update to do that.
>>
>> I don't have a variable file or playbook since I made the storage
>> manually; I stopped using the playbook because at that point I couldn't
>> enable RDMA or over-provision the disks correctly unless I made them
>> manually. But as I said, this is something in 4.3.3, as if I go back to
>> 4.3.2 I can reboot with no problem.
>>
>
Ah, okay. Since you are using VDO, the fstab entry should look something
like this:

UUID=4f1f2e90-5a22-4995-9dd4-7ab9a7ddb438 /gluster_bricks/engine xfs inode64,noatime,nodiratime,_netdev,x-systemd.device-timeout=0,x-systemd.requires=vdo.service 0 0

That is what gluster-ansible adds to the fstab file. This entry would have
been added if you had used the variable gluster_infra_vdo ... How did you
create your filesystem, manually or using the deploy script?
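
For reference, in gluster-ansible that variable is typically declared in the
inventory/variable file as a list of VDO devices, something like the snippet
below; the device paths here are only placeholders and the exact keys
accepted depend on the gluster-ansible version, so treat it as a sketch
rather than a tested playbook:

gluster_infra_vdo:
  - { name: 'vdo_sdb', device: '/dev/sdb' }
  - { name: 'vdo_sdc', device: '/dev/sdc' }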

Can you please share your variable file and playbook?

[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-21 Thread Sahina Bose
+Sachidananda URS 

On Wed, May 22, 2019 at 1:14 AM  wrote:

> I'm sorry, I'm still working on my Linux knowledge; here is the output of
> blkid on one of the servers:
>
> /dev/nvme0n1: PTTYPE="dos"
> /dev/nvme1n1: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
> /dev/mapper/eui.0025385881b40f60: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020p1:
> UUID="pfJiP3-HCgP-gCyQ-UIzT-akGk-vRpV-aySGZ2" TYPE="LVM2_member"
> /dev/mapper/eui.0025385881b40f60p1:
> UUID="Q0fyzN-9q0s-WDLe-r0IA-MFY0-tose-yzZeu2" TYPE="LVM2_member"
>
> /dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H: PTTYPE="dos"
> /dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H1:
> UUID="lQrtPt-nx0u-P6Or-f2YW-sN2o-jK9I-gp7P2m" TYPE="LVM2_member"
> /dev/mapper/vg_gluster_ssd-lv_gluster_ssd:
> UUID="890feffe-c11b-4c01-b839-a5906ab39ecb" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1:
> UUID="7049fd2a-788d-44cb-9dc5-7b4c0ee309fb" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2:
> UUID="2c541b70-32c5-496e-863f-ea68b50e7671" TYPE="vdo"
> /dev/mapper/vdo_gluster_ssd: UUID="e59a68d5-2b73-487a-ac5e-409e11402ab5"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme1: UUID="d5f53f17-bca1-4cb9-86d5-34a468c062e7"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme2: UUID="40a41b5f-be87-4994-b6ea-793cdfc076a4"
> TYPE="xfs"
>
> #2
> /dev/nvme0n1: PTTYPE="dos"
> /dev/nvme1n1: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020p1:
> UUID="GiBSqT-JJ3r-Tn3X-lzCr-zW3D-F3IE-OpE4Ga" TYPE="LVM2_member"
> /dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001:
> PTTYPE="dos"
> /dev/sda: PTTYPE="gpt"
> /dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001p1:
> UUID="JBhj79-Uk0E-DdLE-Ibof-VwBq-T5nZ-F8d57O" TYPE="LVM2_member"
> /dev/sdb: PTTYPE="dos"
> /dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B: PTTYPE="dos"
> /dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B1:
> UUID="6yp5YM-D1be-M27p-AEF5-w1pv-uXNF-2vkiJZ" TYPE="LVM2_member"
> /dev/mapper/vg_gluster_ssd-lv_gluster_ssd:
> UUID="9643695c-0ace-4cba-a42c-3f337a7d5133" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2:
> UUID="79f5bacc-cbe7-4b67-be05-414f68818f41" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1:
> UUID="2438a550-5fb4-48f4-a5ef-5cff5e7d5ba8" TYPE="vdo"
> /dev/mapper/vdo_gluster_ssd: UUID="5bb67f61-9d14-4d0b-8aa4-ae3905276797"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme1: UUID="732f939c-f133-4e48-8dc8-c9d21dbc0853"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme2: UUID="f55082ca-1269-4477-9bf8-7190f1add9ef"
> TYPE="xfs"
>
> #3
> /dev/nvme1n1: UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
> /dev/nvme0n1: PTTYPE="dos"
> /dev/mapper/nvme.c0a9-313931304531454644323630-4354353030503153534438-0001:
> UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020p1:
> UUID="FwBRJJ-ofHI-1kHq-uEf1-H3Fn-SQcw-qWYvmL" TYPE="LVM2_member"
> /dev/sda: PTTYPE="gpt"
> /dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A: PTTYPE="gpt"
> /dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A1:
> UUID="weCmOq-VZ1a-Itf5-SOIS-AYLp-Ud5N-S1H2bR" TYPE="LVM2_member"
> PARTUUID="920ef5fd-e525-4cf0-99d5-3951d3013c19"
> /dev/mapper/vg_gluster_ssd-lv_gluster_ssd:
> UUID="fbaffbde-74f0-4e4a-9564-64ca84398cde" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2:
> UUID="ae0bd2ad-7da9-485b-824a-72038571c5ba" TYPE="vdo"
> /dev/mapper/vdo_gluster_ssd: UUID="f0f56784-bc71-46c7-8bfe-6b71327c87c9"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme1: UUID="0ddc1180-f228-4209-82f1-1607a46aed1f"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme2: UUID="bcb7144a-6ce0-4b3f-9537-f465c46d4843"
> TYPE="xfs"
>
> I don't have any errors on mount until I reboot, and once I reboot it
> takes ~6hrs for everything to work 100%, since I have to delete the mount
> entries for the 3 gluster volumes out of fstab and reboot. I'd rather
> wait until the next update to do that.
>
> I don't have a variable file or playbook since I made the storage
> manually; I stopped using the playbook because at that point I couldn't
> enable RDMA or over-provision the disks correctly unless I made them
> manually. But as I said, this is something in 4.3.3, as if I go back to
> 4.3.2 I can reboot with no problem.
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/EDGJVIYPMHN5HYARBNCN36NRSTKMSLLW/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/

[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-21 Thread Strahil Nikolov
Do you use VDO? If yes, consider setting up systemd ".mount" units, as this is
the only way to set up dependencies.

Best Regards,
Strahil Nikolov
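
For example, a minimal .mount unit sketch for one of the bricks in this
thread might look like the block below. The UUID and mount point come from
the fstab lines quoted elsewhere in the thread; everything else (description,
mount options) is an assumption, and the unit file name must mirror the mount
path (/gluster_bricks/storage_ssd -> /etc/systemd/system/gluster_bricks-storage_ssd.mount):

[Unit]
Description=Gluster brick on VDO (example)
Requires=vdo.service
After=vdo.service

[Mount]
What=/dev/disk/by-uuid/5bb67f61-9d14-4d0b-8aa4-ae3905276797
Where=/gluster_bricks/storage_ssd
Type=xfs
Options=inode64,noatime,nodiratime

[Install]
WantedBy=multi-user.target

Enable it with "systemctl daemon-reload" followed by
"systemctl enable --now gluster_bricks-storage_ssd.mount", and drop the
matching fstab line so the two definitions don't conflict.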

On Tuesday, May 21, 2019 at 22:44:06 GMT+3, mich...@wanderingmad.com wrote:
 
I'm sorry, I'm still working on my Linux knowledge; here is the output of
blkid on one of the servers:

/dev/nvme0n1: PTTYPE="dos"
/dev/nvme1n1: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020: PTTYPE="dos"
/dev/mapper/eui.0025385881b40f60: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020p1: 
UUID="pfJiP3-HCgP-gCyQ-UIzT-akGk-vRpV-aySGZ2" TYPE="LVM2_member"
/dev/mapper/eui.0025385881b40f60p1: 
UUID="Q0fyzN-9q0s-WDLe-r0IA-MFY0-tose-yzZeu2" TYPE="LVM2_member"

/dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H: PTTYPE="dos"
/dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H1: 
UUID="lQrtPt-nx0u-P6Or-f2YW-sN2o-jK9I-gp7P2m" TYPE="LVM2_member"
/dev/mapper/vg_gluster_ssd-lv_gluster_ssd: 
UUID="890feffe-c11b-4c01-b839-a5906ab39ecb" TYPE="vdo"
/dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1: 
UUID="7049fd2a-788d-44cb-9dc5-7b4c0ee309fb" TYPE="vdo"
/dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2: 
UUID="2c541b70-32c5-496e-863f-ea68b50e7671" TYPE="vdo"
/dev/mapper/vdo_gluster_ssd: UUID="e59a68d5-2b73-487a-ac5e-409e11402ab5" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme1: UUID="d5f53f17-bca1-4cb9-86d5-34a468c062e7" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme2: UUID="40a41b5f-be87-4994-b6ea-793cdfc076a4" 
TYPE="xfs"

#2
/dev/nvme0n1: PTTYPE="dos"
/dev/nvme1n1: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020p1: 
UUID="GiBSqT-JJ3r-Tn3X-lzCr-zW3D-F3IE-OpE4Ga" TYPE="LVM2_member"
/dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001:
 PTTYPE="dos"
/dev/sda: PTTYPE="gpt"
/dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001p1:
 UUID="JBhj79-Uk0E-DdLE-Ibof-VwBq-T5nZ-F8d57O" TYPE="LVM2_member"
/dev/sdb: PTTYPE="dos"
/dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B: PTTYPE="dos"
/dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B1: 
UUID="6yp5YM-D1be-M27p-AEF5-w1pv-uXNF-2vkiJZ" TYPE="LVM2_member"
/dev/mapper/vg_gluster_ssd-lv_gluster_ssd: 
UUID="9643695c-0ace-4cba-a42c-3f337a7d5133" TYPE="vdo"
/dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2: 
UUID="79f5bacc-cbe7-4b67-be05-414f68818f41" TYPE="vdo"
/dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1: 
UUID="2438a550-5fb4-48f4-a5ef-5cff5e7d5ba8" TYPE="vdo"
/dev/mapper/vdo_gluster_ssd: UUID="5bb67f61-9d14-4d0b-8aa4-ae3905276797" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme1: UUID="732f939c-f133-4e48-8dc8-c9d21dbc0853" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme2: UUID="f55082ca-1269-4477-9bf8-7190f1add9ef" 
TYPE="xfs"

#3
/dev/nvme1n1: UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
/dev/nvme0n1: PTTYPE="dos"
/dev/mapper/nvme.c0a9-313931304531454644323630-4354353030503153534438-0001: 
UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
/dev/mapper/eui.6479a71892882020: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020p1: 
UUID="FwBRJJ-ofHI-1kHq-uEf1-H3Fn-SQcw-qWYvmL" TYPE="LVM2_member"
/dev/sda: PTTYPE="gpt"
/dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A: PTTYPE="gpt"
/dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A1: 
UUID="weCmOq-VZ1a-Itf5-SOIS-AYLp-Ud5N-S1H2bR" TYPE="LVM2_member" 
PARTUUID="920ef5fd-e525-4cf0-99d5-3951d3013c19"
/dev/mapper/vg_gluster_ssd-lv_gluster_ssd: 
UUID="fbaffbde-74f0-4e4a-9564-64ca84398cde" TYPE="vdo"
/dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2: 
UUID="ae0bd2ad-7da9-485b-824a-72038571c5ba" TYPE="vdo"
/dev/mapper/vdo_gluster_ssd: UUID="f0f56784-bc71-46c7-8bfe-6b71327c87c9" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme1: UUID="0ddc1180-f228-4209-82f1-1607a46aed1f" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme2: UUID="bcb7144a-6ce0-4b3f-9537-f465c46d4843" 
TYPE="xfs"

I don't have any errors on mount until I reboot, and once I reboot it takes
~6hrs for everything to work 100%, since I have to delete the mount entries
for the 3 gluster volumes out of fstab and reboot. I'd rather wait until the
next update to do that.

I don't have a variable file or playbook since I made the storage manually; I
stopped using the playbook because at that point I couldn't enable RDMA or
over-provision the disks correctly unless I made them manually. But as I said,
this is something in 4.3.3, as if I go back to 4.3.2 I can reboot with no
problem.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EDGJVIYPMHN5HYARBNCN36NRSTKMSLLW/
  ___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/

[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-21 Thread michael
I'm sorry, I'm still working on my Linux knowledge; here is the output of
blkid on one of the servers:

/dev/nvme0n1: PTTYPE="dos"
/dev/nvme1n1: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020: PTTYPE="dos"
/dev/mapper/eui.0025385881b40f60: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020p1: 
UUID="pfJiP3-HCgP-gCyQ-UIzT-akGk-vRpV-aySGZ2" TYPE="LVM2_member"
/dev/mapper/eui.0025385881b40f60p1: 
UUID="Q0fyzN-9q0s-WDLe-r0IA-MFY0-tose-yzZeu2" TYPE="LVM2_member"

/dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H: PTTYPE="dos"
/dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H1: 
UUID="lQrtPt-nx0u-P6Or-f2YW-sN2o-jK9I-gp7P2m" TYPE="LVM2_member"
/dev/mapper/vg_gluster_ssd-lv_gluster_ssd: 
UUID="890feffe-c11b-4c01-b839-a5906ab39ecb" TYPE="vdo"
/dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1: 
UUID="7049fd2a-788d-44cb-9dc5-7b4c0ee309fb" TYPE="vdo"
/dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2: 
UUID="2c541b70-32c5-496e-863f-ea68b50e7671" TYPE="vdo"
/dev/mapper/vdo_gluster_ssd: UUID="e59a68d5-2b73-487a-ac5e-409e11402ab5" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme1: UUID="d5f53f17-bca1-4cb9-86d5-34a468c062e7" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme2: UUID="40a41b5f-be87-4994-b6ea-793cdfc076a4" 
TYPE="xfs"

#2
/dev/nvme0n1: PTTYPE="dos"
/dev/nvme1n1: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020p1: 
UUID="GiBSqT-JJ3r-Tn3X-lzCr-zW3D-F3IE-OpE4Ga" TYPE="LVM2_member"
/dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001:
 PTTYPE="dos"
/dev/sda: PTTYPE="gpt"
/dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-0001p1:
 UUID="JBhj79-Uk0E-DdLE-Ibof-VwBq-T5nZ-F8d57O" TYPE="LVM2_member"
/dev/sdb: PTTYPE="dos"
/dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B: PTTYPE="dos"
/dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B1: 
UUID="6yp5YM-D1be-M27p-AEF5-w1pv-uXNF-2vkiJZ" TYPE="LVM2_member"
/dev/mapper/vg_gluster_ssd-lv_gluster_ssd: 
UUID="9643695c-0ace-4cba-a42c-3f337a7d5133" TYPE="vdo"
/dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2: 
UUID="79f5bacc-cbe7-4b67-be05-414f68818f41" TYPE="vdo"
/dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1: 
UUID="2438a550-5fb4-48f4-a5ef-5cff5e7d5ba8" TYPE="vdo"
/dev/mapper/vdo_gluster_ssd: UUID="5bb67f61-9d14-4d0b-8aa4-ae3905276797" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme1: UUID="732f939c-f133-4e48-8dc8-c9d21dbc0853" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme2: UUID="f55082ca-1269-4477-9bf8-7190f1add9ef" 
TYPE="xfs"

#3
/dev/nvme1n1: UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
/dev/nvme0n1: PTTYPE="dos"
/dev/mapper/nvme.c0a9-313931304531454644323630-4354353030503153534438-0001: 
UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
/dev/mapper/eui.6479a71892882020: PTTYPE="dos"
/dev/mapper/eui.6479a71892882020p1: 
UUID="FwBRJJ-ofHI-1kHq-uEf1-H3Fn-SQcw-qWYvmL" TYPE="LVM2_member"
/dev/sda: PTTYPE="gpt"
/dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A: PTTYPE="gpt"
/dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A1: 
UUID="weCmOq-VZ1a-Itf5-SOIS-AYLp-Ud5N-S1H2bR" TYPE="LVM2_member" 
PARTUUID="920ef5fd-e525-4cf0-99d5-3951d3013c19"
/dev/mapper/vg_gluster_ssd-lv_gluster_ssd: 
UUID="fbaffbde-74f0-4e4a-9564-64ca84398cde" TYPE="vdo"
/dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2: 
UUID="ae0bd2ad-7da9-485b-824a-72038571c5ba" TYPE="vdo"
/dev/mapper/vdo_gluster_ssd: UUID="f0f56784-bc71-46c7-8bfe-6b71327c87c9" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme1: UUID="0ddc1180-f228-4209-82f1-1607a46aed1f" 
TYPE="xfs"
/dev/mapper/vdo_gluster_nvme2: UUID="bcb7144a-6ce0-4b3f-9537-f465c46d4843" 
TYPE="xfs"

I don't have any errors on mount until I reboot, and once I reboot it takes
~6hrs for everything to work 100%, since I have to delete the mount entries
for the 3 gluster volumes out of fstab and reboot. I'd rather wait until the
next update to do that.

I don't have a variable file or playbook since I made the storage manually; I
stopped using the playbook because at that point I couldn't enable RDMA or
over-provision the disks correctly unless I made them manually. But as I said,
this is something in 4.3.3, as if I go back to 4.3.2 I can reboot with no
problem.
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/EDGJVIYPMHN5HYARBNCN36NRSTKMSLLW/


[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-20 Thread Sachidananda URS
On Mon, May 20, 2019 at 11:58 AM Sahina Bose  wrote:

> Adding Sachi
>
> On Thu, May 9, 2019 at 2:01 AM  wrote:
>
>> This only started to happen with oVirt Node 4.3; 4.2 didn't have this issue.
>> Since I updated to 4.3, every reboot the host goes into emergency mode.
>> The first few times this happened I re-installed the OS from scratch, but
>> after some digging I found out that the drives it mounts in /etc/fstab
>> cause the problem, specifically these mounts. All three are single drives:
>> one is an SSD and the other 2 are individual NVMe drives.
>>
>> UUID=732f939c-f133-4e48-8dc8-c9d21dbc0853 /gluster_bricks/storage_nvme1
>> auto defaults 0 0
>> UUID=5bb67f61-9d14-4d0b-8aa4-ae3905276797 /gluster_bricks/storage_ssd
>> auto defaults 0 0
>> UUID=f55082ca-1269-4477-9bf8-7190f1add9ef /gluster_bricks/storage_nvme2
>> auto defaults 0 0
>>
>> In order to get the host to actually boot, I have to go to the console,
>> delete those mounts, reboot, and then re-add them, and they end up with new
>> UUIDs. All of these hosts reliably rebooted in 4.2 and earlier, but all
>> the versions of 4.3 have this same problem (I keep updating in the hope the
>> issue is fixed).
>>
>

Hello Michael,

I need your help in resolving this. I would like to understand whether
something in the environment is affecting this.

What is the output of:
# blkid /dev/vgname/lvname
For the three bricks you have.
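
With the VG/LV names from the blkid output elsewhere in this thread, that
would be something like the following (assuming the names are the same on
every host):

# blkid /dev/vg_gluster_ssd/lv_gluster_ssd
# blkid /dev/vg_gluster_nvme1/lv_gluster_nvme1
# blkid /dev/vg_gluster_nvme2/lv_gluster_nvme2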

Also, what is the error you see when you run the commands:
# mount /gluster_bricks/storage_nvme1
# mount /gluster_bricks/storage_ssd

Also can you please attach your variable file and playbook?
In my setup things work fine, which is making it difficult for me to fix.

-sac
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/YDJCWINE3UP4CKGFD3QARN67TBQOHRTQ/


[ovirt-users] Re: oVirt node loses gluster volume UUID after reboot, goes to emergency mode every time I reboot.

2019-05-19 Thread Sahina Bose
Adding Sachi

On Thu, May 9, 2019 at 2:01 AM  wrote:

> This only started to happen with oVirt Node 4.3; 4.2 didn't have this issue.
> Since I updated to 4.3, every reboot the host goes into emergency mode.
> The first few times this happened I re-installed the OS from scratch, but
> after some digging I found out that the drives it mounts in /etc/fstab
> cause the problem, specifically these mounts. All three are single drives:
> one is an SSD and the other 2 are individual NVMe drives.
>
> UUID=732f939c-f133-4e48-8dc8-c9d21dbc0853 /gluster_bricks/storage_nvme1
> auto defaults 0 0
> UUID=5bb67f61-9d14-4d0b-8aa4-ae3905276797 /gluster_bricks/storage_ssd auto
> defaults 0 0
> UUID=f55082ca-1269-4477-9bf8-7190f1add9ef /gluster_bricks/storage_nvme2
> auto defaults 0 0
>
> In order to get the host to actually boot, I have to go to the console,
> delete those mounts, reboot, and then re-add them, and they end up with new
> UUIDs. All of these hosts reliably rebooted in 4.2 and earlier, but all
> the versions of 4.3 have this same problem (I keep updating in the hope the
> issue is fixed).
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/I4UKZAWPQDXWA47AKTQD43PAUCK2JBJN/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/U6Y36PGDKND7XYYKZI7UII64T4AMBOIL/