+Sachidananda URS <s...@redhat.com>

On Wed, May 22, 2019 at 1:14 AM <mich...@wanderingmad.com> wrote:

> I'm sorry, I'm still working on my Linux knowledge. Here is the output of
> blkid on my servers:
>
> #1
> /dev/nvme0n1: PTTYPE="dos"
> /dev/nvme1n1: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
> /dev/mapper/eui.0025385881b40f60: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020p1:
> UUID="pfJiP3-HCgP-gCyQ-UIzT-akGk-vRpV-aySGZ2" TYPE="LVM2_member"
> /dev/mapper/eui.0025385881b40f60p1:
> UUID="Q0fyzN-9q0s-WDLe-r0IA-MFY0-tose-yzZeu2" TYPE="LVM2_member"
>
> /dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H: PTTYPE="dos"
> /dev/mapper/Samsung_SSD_850_EVO_1TB_S21CNXAG615134H1:
> UUID="lQrtPt-nx0u-P6Or-f2YW-sN2o-jK9I-gp7P2m" TYPE="LVM2_member"
> /dev/mapper/vg_gluster_ssd-lv_gluster_ssd:
> UUID="890feffe-c11b-4c01-b839-a5906ab39ecb" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1:
> UUID="7049fd2a-788d-44cb-9dc5-7b4c0ee309fb" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2:
> UUID="2c541b70-32c5-496e-863f-ea68b50e7671" TYPE="vdo"
> /dev/mapper/vdo_gluster_ssd: UUID="e59a68d5-2b73-487a-ac5e-409e11402ab5"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme1: UUID="d5f53f17-bca1-4cb9-86d5-34a468c062e7"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme2: UUID="40a41b5f-be87-4994-b6ea-793cdfc076a4"
> TYPE="xfs"
>
> #2
> /dev/nvme0n1: PTTYPE="dos"
> /dev/nvme1n1: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020p1:
> UUID="GiBSqT-JJ3r-Tn3X-lzCr-zW3D-F3IE-OpE4Ga" TYPE="LVM2_member"
> /dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-00000001:
> PTTYPE="dos"
> /dev/sda: PTTYPE="gpt"
> /dev/mapper/nvme.126f-324831323230303337383138-4144415441205358383030304e50-00000001p1:
> UUID="JBhj79-Uk0E-DdLE-Ibof-VwBq-T5nZ-F8d57O" TYPE="LVM2_member"
> /dev/sdb: PTTYPE="dos"
> /dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B: PTTYPE="dos"
> /dev/mapper/Samsung_SSD_860_EVO_1TB_S3Z8NB0K843638B1:
> UUID="6yp5YM-D1be-M27p-AEF5-w1pv-uXNF-2vkiJZ" TYPE="LVM2_member"
> /dev/mapper/vg_gluster_ssd-lv_gluster_ssd:
> UUID="9643695c-0ace-4cba-a42c-3f337a7d5133" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2:
> UUID="79f5bacc-cbe7-4b67-be05-414f68818f41" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme1-lv_gluster_nvme1:
> UUID="2438a550-5fb4-48f4-a5ef-5cff5e7d5ba8" TYPE="vdo"
> /dev/mapper/vdo_gluster_ssd: UUID="5bb67f61-9d14-4d0b-8aa4-ae3905276797"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme1: UUID="732f939c-f133-4e48-8dc8-c9d21dbc0853"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme2: UUID="f55082ca-1269-4477-9bf8-7190f1add9ef"
> TYPE="xfs"
>
> #3
> /dev/nvme1n1: UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
> /dev/nvme0n1: PTTYPE="dos"
> /dev/mapper/nvme.c0a9-313931304531454644323630-4354353030503153534438-00000001:
> UUID="8f1dc44e-f35f-438a-9abc-54757fd7ef32" TYPE="vdo"
> /dev/mapper/eui.6479a71892882020: PTTYPE="dos"
> /dev/mapper/eui.6479a71892882020p1:
> UUID="FwBRJJ-ofHI-1kHq-uEf1-H3Fn-SQcw-qWYvmL" TYPE="LVM2_member"
> /dev/sda: PTTYPE="gpt"
> /dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A: PTTYPE="gpt"
> /dev/mapper/Samsung_SSD_850_EVO_1TB_S2RENX0J302798A1:
> UUID="weCmOq-VZ1a-Itf5-SOIS-AYLp-Ud5N-S1H2bR" TYPE="LVM2_member"
> PARTUUID="920ef5fd-e525-4cf0-99d5-3951d3013c19"
> /dev/mapper/vg_gluster_ssd-lv_gluster_ssd:
> UUID="fbaffbde-74f0-4e4a-9564-64ca84398cde" TYPE="vdo"
> /dev/mapper/vg_gluster_nvme2-lv_gluster_nvme2:
> UUID="ae0bd2ad-7da9-485b-824a-72038571c5ba" TYPE="vdo"
> /dev/mapper/vdo_gluster_ssd: UUID="f0f56784-bc71-46c7-8bfe-6b71327c87c9"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme1: UUID="0ddc1180-f228-4209-82f1-1607a46aed1f"
> TYPE="xfs"
> /dev/mapper/vdo_gluster_nvme2: UUID="bcb7144a-6ce0-4b3f-9537-f465c46d4843"
> TYPE="xfs"
>
> I don't see any errors on mount until I reboot. Once I reboot, it takes
> ~6 hours for everything to work 100% again, because I have to delete the
> mount entries for the 3 gluster volumes from /etc/fstab and reboot. I'd
> rather wait until the next update to do that.
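>
> For illustration, the entries I have to pull out look roughly like the
> following (the mount points and options here are just an example of an
> XFS-on-VDO fstab line, not my exact file; x-systemd.requires=vdo.service
> is the option the VDO docs suggest so the mount waits for the VDO
> volumes to come up):
>
>   # example fstab lines for the three gluster bricks (illustrative only)
>   /dev/mapper/vdo_gluster_ssd    /gluster_bricks/ssd    xfs  inode64,noatime,x-systemd.requires=vdo.service  0 0
>   /dev/mapper/vdo_gluster_nvme1  /gluster_bricks/nvme1  xfs  inode64,noatime,x-systemd.requires=vdo.service  0 0
>   /dev/mapper/vdo_gluster_nvme2  /gluster_bricks/nvme2  xfs  inode64,noatime,x-systemd.requires=vdo.service  0 0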
>
> I don't have a variables file or playbook, since I created the storage
> manually. I stopped using the playbook because at that point I couldn't
> enable RDMA or over-provision the disks correctly unless I set them up by
> hand. But as I said, this looks like something in 4.3.3: if I go back to
> 4.3.2 I can reboot with no problem.
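>
> Roughly, the manual setup per brick was along these lines (the sizes,
> volume names, and hosts below are placeholders for illustration, not the
> exact commands I ran):
>
>   # create a VDO volume on the gluster LV, over-provisioned to 10T logical
>   vdo create --name=vdo_gluster_nvme1 \
>       --device=/dev/vg_gluster_nvme1/lv_gluster_nvme1 \
>       --vdoLogicalSize=10T
>   # format without issuing discards across the whole sparse logical size
>   mkfs.xfs -K /dev/mapper/vdo_gluster_nvme1
>   # create the replica 3 gluster volume with both tcp and rdma transports
>   gluster volume create nvme1 replica 3 transport tcp,rdma \
>       host1:/gluster_bricks/nvme1/brick host2:/gluster_bricks/nvme1/brick \
>       host3:/gluster_bricks/nvme1/brick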
>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NTZWELJJBXMUKBBNMNBQUHMCXCZIEU5Y/
