I did some checking, and my disk is not in the state I expected.  (The system 
doesn't even know the VG exists in its present state.)  See the results:
# pvs
  PV         VG          Fmt  Attr PSize   PFree 
  /dev/md127 onn_vmh     lvm2 a--  222.44g 43.66g
  /dev/sdd1  gluster_vg3 lvm2 a--   <4.00g <2.00g

# pvs -a
  PV                                                VG          Fmt  Attr PSize   PFree
  /dev/md127                                        onn_vmh     lvm2 a--  222.44g 43.66g
  /dev/onn_vmh/home                                                  ---        0      0
  /dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0+1                  ---        0      0
  /dev/onn_vmh/root                                                  ---        0      0
  /dev/onn_vmh/swap                                                  ---        0      0
  /dev/onn_vmh/tmp                                                   ---        0      0
  /dev/onn_vmh/var                                                   ---        0      0
  /dev/onn_vmh/var_crash                                             ---        0      0
  /dev/onn_vmh/var_log                                               ---        0      0
  /dev/onn_vmh/var_log_audit                                         ---        0      0
  /dev/sda1                                                          ---        0      0
  /dev/sdb1                                                          ---        0      0
  /dev/sdd1                                         gluster_vg3 lvm2 a--   <4.00g <2.00g
  /dev/sde1                                                          ---        0      0

# vgs
  VG          #PV #LV #SN Attr   VSize   VFree 
  gluster_vg3   1   1   0 wz--n-  <4.00g <2.00g
  onn_vmh       1  11   0 wz--n- 222.44g 43.66g

# vgs -a
  VG          #PV #LV #SN Attr   VSize   VFree 
  gluster_vg3   1   1   0 wz--n-  <4.00g <2.00g
  onn_vmh       1  11   0 wz--n- 222.44g 43.66g

# lvs
  LV                                   VG          Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  tmpLV                                gluster_vg3 -wi-------   2.00g
  home                                 onn_vmh     Vwi-aotz--   1.00g pool00                                     4.79
  ovirt-node-ng-4.2.7.1-0.20181216.0   onn_vmh     Vwi---tz-k 146.60g pool00 root
  ovirt-node-ng-4.2.7.1-0.20181216.0+1 onn_vmh     Vwi-aotz-- 146.60g pool00 ovirt-node-ng-4.2.7.1-0.20181216.0  4.81
  pool00                               onn_vmh     twi-aotz-- 173.60g                                            7.21   2.30
  root                                 onn_vmh     Vwi-a-tz-- 146.60g pool00                                     2.92
  swap                                 onn_vmh     -wi-ao----   4.00g
  tmp                                  onn_vmh     Vwi-aotz--   1.00g pool00                                    53.66
  var                                  onn_vmh     Vwi-aotz--  15.00g pool00                                    15.75
  var_crash                            onn_vmh     Vwi-aotz--  10.00g pool00                                     2.86
  var_log                              onn_vmh     Vwi-aotz--   8.00g pool00                                    14.73
  var_log_audit                        onn_vmh     Vwi-aotz--   2.00g pool00                                     6.91

# lvs -a
  LV                                   VG          Attr       LSize   Pool   Origin                             Data%  Meta%  Move Log Cpy%Sync Convert
  tmpLV                                gluster_vg3 -wi-------   2.00g
  home                                 onn_vmh     Vwi-aotz--   1.00g pool00                                     4.79
  [lvol0_pmspare]                      onn_vmh     ewi------- 180.00m
  ovirt-node-ng-4.2.7.1-0.20181216.0   onn_vmh     Vwi---tz-k 146.60g pool00 root
  ovirt-node-ng-4.2.7.1-0.20181216.0+1 onn_vmh     Vwi-aotz-- 146.60g pool00 ovirt-node-ng-4.2.7.1-0.20181216.0  4.81
  pool00                               onn_vmh     twi-aotz-- 173.60g                                            7.21   2.30
  [pool00_tdata]                       onn_vmh     Twi-ao---- 173.60g
  [pool00_tmeta]                       onn_vmh     ewi-ao----   1.00g
  root                                 onn_vmh     Vwi-a-tz-- 146.60g pool00                                     2.92
  swap                                 onn_vmh     -wi-ao----   4.00g
  tmp                                  onn_vmh     Vwi-aotz--   1.00g pool00                                    53.66
  var                                  onn_vmh     Vwi-aotz--  15.00g pool00                                    15.75
  var_crash                            onn_vmh     Vwi-aotz--  10.00g pool00                                     2.86
  var_log                              onn_vmh     Vwi-aotz--   8.00g pool00                                    14.73
  var_log_audit                        onn_vmh     Vwi-aotz--   2.00g pool00                                     6.91

# pvscan
  PV /dev/md127   VG onn_vmh         lvm2 [222.44 GiB / 43.66 GiB free]
  PV /dev/sdd1    VG gluster_vg3     lvm2 [<4.00 GiB / <2.00 GiB free]
  Total: 2 [<226.44 GiB] / in use: 2 [<226.44 GiB] / in no VG: 0 [0   ]

# lvscan
  ACTIVE            '/dev/onn_vmh/pool00' [173.60 GiB] inherit
  ACTIVE            '/dev/onn_vmh/root' [146.60 GiB] inherit
  ACTIVE            '/dev/onn_vmh/home' [1.00 GiB] inherit
  ACTIVE            '/dev/onn_vmh/tmp' [1.00 GiB] inherit
  ACTIVE            '/dev/onn_vmh/var' [15.00 GiB] inherit
  ACTIVE            '/dev/onn_vmh/var_log' [8.00 GiB] inherit
  ACTIVE            '/dev/onn_vmh/var_log_audit' [2.00 GiB] inherit
  ACTIVE            '/dev/onn_vmh/swap' [4.00 GiB] inherit
  inactive          '/dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0' [146.60 GiB] inherit
  ACTIVE            '/dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0+1' [146.60 GiB] inherit
  ACTIVE            '/dev/onn_vmh/var_crash' [10.00 GiB] inherit
  inactive          '/dev/gluster_vg3/tmpLV' [2.00 GiB] inherit

# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "onn_vmh" using metadata type lvm2
  Found volume group "gluster_vg3" using metadata type lvm2

I half expect that I will need to restore/rebuild the disk using the data 
from the LVM backup folder.  I found some interesting articles: 
https://www3.unixrealm.com/repair-a-thin-pool/ and 
https://chappie800.wordpress.com/2017/06/13/lvm-repair-metadata/  I have read 
through them and am slowly digesting the information.  Repairing thin volumes 
is definitely new to me; in my past experience they have simply worked.  I am 
only now learning the hard way that thin volumes have metadata and a very 
specific structure.  At least I'm not learning LVM and thin provisioning at 
the same time.  Anyway, I digress.  Based on what I read, I downloaded and 
installed the device-mapper-persistent-data rpm on the node, then checked the 
tools' location to see what is available (I have also sketched, after the 
listings, the checks I think come next):

# cd /usr/bin (checking for the existence of available tools)
# ls | grep pv
fipvlan
pvchange
pvck
pvcreate
pvdisplay
pvmove
pvremove
pvresize
pvs
pvscan

# ls | grep vg
vgcfgbackup
vgcfgrestore
vgchange
vgck
vgconvert
vgcreate
vgdisplay
vgexport
vgextend
vgimport
vgimportclone
vgmerge
vgmknodes
vgreduce
vgremove
vgrename
vgs
vgscan
vgsplit

# ls | grep lv
lvchange
lvconvert
lvcreate
lvdisplay
lvextend
lvm
lvmconf
lvmconfig
lvmdiskscan
lvmdump
lvmetad
lvmpolld
lvmsadc
lvmsar
lvreduce
lvremove
lvrename
lvresize
lvs
lvscan
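
The lvm2 binaries above don't cover thin metadata, so I also want to confirm 
that the device-mapper-persistent-data rpm actually delivered its tools 
(thin_check, thin_dump, thin_repair, thin_restore), and to see which on-disk 
metadata copies I have to work from.  This is just a sketch of the checks I 
plan to run, assuming the thin tools land in /usr/sbin and that lvm.conf 
still uses the default backup/archive locations:

# ls /usr/sbin | grep thin_ (confirming the thin tools were installed)
# ls -l /etc/lvm/backup /etc/lvm/archive (LVM's text copies of the VG metadata)
# vgcfgrestore --list onn_vmh (list the archived metadata versions for a VG)
# vgcfgrestore --list gluster_vg3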

Based on this scenario, how do I get my disks into a state where I can both 
find and rebuild the VG and its metadata, and the LVs and their metadata?
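
From the two articles, my rough understanding of the repair sequence is the 
sketch below.  I am writing it against onn_vmh/pool00 only because that is 
the one thin pool visible in my output, and since root, var, etc. are thin 
volumes inside that pool, I assume it would have to run from a rescue boot 
rather than the running node.  Please correct me before I point any of this 
at a real disk:

# blkid /dev/sda1 /dev/sdb1 /dev/sde1 (do the unassigned partitions still 
                                       carry an LVM2_member signature?)
# pvck /dev/sdb1 (check the LVM label and metadata area on the device)
# vgchange -an onn_vmh (the pool and its thin volumes must be inactive first)
# lvconvert --repair onn_vmh/pool00 (runs thin_repair against the pool 
                                     metadata and swaps in the repaired copy; 
                                     the old metadata is kept as pool00_meta0)
# vgcfgrestore --force onn_vmh (only if the VG metadata itself has to come 
                                back from the archive; --force is required 
                                when thin volumes are present, and the man 
                                page warns about using it)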