I did some checking and my disk is not in the state I expected. (The system
doesn't even know the VG exists in its present state.) See the results:
# pvs
PV          VG          Fmt  Attr PSize   PFree
/dev/md127  onn_vmh     lvm2 a--  222.44g 43.66g
/dev/sdd1   gluster_vg3 lvm2 a--  <4.00g  <2
> >>>"Try to get all data in advance (before deactivating the VG)".
>
> Can you clarify? What do you mean by this?
Get all necessary info before disabling the VG.
For example:
pvs
vgs
lvs -a
lvdisplay -m /dev/gluster_vg1/lvthinpool
lvdisplay -m /dev/gluster_vg1/lvthinpool_tmeta
lvdisplay -m
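For illustration, one way to capture that output before touching anything (the file paths here are only examples; vgcfgbackup writes its metadata backups under /etc/lvm/backup):
pvs > /root/pvs.before.txt
vgs > /root/vgs.before.txt
lvs -a > /root/lvs-a.before.txt
lvdisplay -m > /root/lvdisplay-m.before.txt
vgcfgbackup    # backs up the metadata of every visible VG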
Sorry for the multiple posts. I had so many thoughts rolling around in my
head. I'll try to consolidate my questions here and rephrase the last three
responses.
>>>"Try to get all data in advance (before deactivating the VG)".
Can you clarify? What do you mean by this?
>>>"I still can't im
Trust Red Hat :)
At least their approach should be safer.
Of course, you can raise a documentation bug, but RHEL7 is in such a phase that it might
not be fixed unless it is also found in v8.
Best Regards,
Strahil Nikolov
On Oct 2, 2019 05:43, jeremy_tourvi...@hotmail.com wrote:
>
> http://man7.org/linux/man-pages/man7/lvmthin.7.html
Hm...
It's strange that it doesn't detect the VG, but that could be related to the issue.
According to this:
lvthinpool_tmeta {
id = "WBut10-rAOP-FzA7-bJvr-ZdxL-lB70-jzz1Tv"
status = ["READ", "WRITE"]
flags = []
creation_time = 1545495487 # 2018-12-22 10:18:07 -0600
creation_host = "vmh.cyber-range.lan"
se
Is this an oVirt Node or a regular CentOS/RHEL?
On Oct 2, 2019 05:06, jeremy_tourvi...@hotmail.com wrote:
>
> Here is my fstab file:
> # /etc/fstab
> # Created by anaconda on Fri Dec 21 22:26:32 2018
> #
> # Accessible filesystems, by reference, are maintained under '/dev/disk'
> # See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
Try to get all data in advance (before deactivating the VG).
I still can't imagine why the VG would disappear. Try 'pvscan --cache' to
redetect the PV.
After all, the VG info is in the PVs' headers and should be visible whether
the VG is deactivated or not.
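For reference, the rescan could look roughly like this (illustrative only):
pvscan --cache    # repopulate the cached view of the PVs
vgs               # check whether the missing VG shows up again
vgscan            # a full rescan of all devices for VGs is another option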
Best Regards,
Strahil Nikolov
http://man7.org/linux/man-pages/man7/lvmthin.7.html
Command to repair a thin pool:
lvconvert --repair VG/ThinPoolLV
Repair performs the following steps:
1. Creates a new, repaired copy of the metadata.
lvconvert runs the thin_repair command to read damaged metadata fr
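As a rough sketch of that repair on the setup from this thread (the pool name is taken from earlier posts, and the pool has to be inactive before the repair):
lvchange -an gluster_vg1/lvthinpool        # the thin pool must be inactive
lvconvert --repair gluster_vg1/lvthinpool  # builds a repaired metadata copy and swaps it in
lvs -a gluster_vg1                         # the damaged metadata is typically kept as a visible *_meta0 LV
If the repair succeeds and everything checks out, that leftover metadata LV can be removed afterwards.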
Here is my fstab file:
# /etc/fstab
# Created by anaconda on Fri Dec 21 22:26:32 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/onn_vmh/ovirt-node-ng-4.2.7.1-0.20181216.0+1 / ext4 def
I don't know why I didn't think to get some more info regarding my storage
environment and post it here earlier. My gluster_vg1 volume is on /dev/sda1.
I can access the engine storage directory but I think that is because it is not
thin provisioned. I guess I was too bogged down in solving th
"lvs -a" does not list the logical volume I am missing.
"lvdisplay -m /dev/gluster_vg1-lvthinpool-tpool_tmeta" does not work either.
Error message is: Volume Group xxx not found. Cannot process volume group xxx."
I am trying to follow the procedure from
https://access.redhat.com/solutions/3251681
You can view all LVs via 'lvs -a' and create a new metadata LV of bigger size.
Of course, lvdisplay -m /dev/gluster_vg1-lvthinpool-tpool_tmeta should also
work.
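In case the naming form is what trips it up, the VG/LV form is worth trying as well (only a sketch; dmsetup shows which device-mapper names actually exist):
lvdisplay -m gluster_vg1/lvthinpool_tmeta    # VG/LV form, as suggested elsewhere in the thread
dmsetup ls | grep gluster_vg1                # list the device-mapper names for this VG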
Best Regards,
Strahil Nikolov
On Oct 1, 2019 03:11, jeremy_tourvi...@hotmail.com wrote:
>
> vgs displays everything EXCEPT gluster_vg1
vgs displays everything EXCEPT gluster_vg1.
"dmsetup ls" does not list the VG in question. That is why I couldn't run the
lvchange command - the LVs were not active or even detected by the system.
OK, I found my problem, and a solution:
https://access.redhat.com/solutions/3251681
# cd /var/log
# gr
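A log search along these lines is one way to spot the thin pool errors (the grep pattern is only an illustration):
cd /var/log
grep -i 'thin\|pool' messages    # look for thin pool / LVM errors in the system log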
What happens when it complains that there are no VGs?
When you run 'vgs', what is the output?
Also, take a look at
https://www.redhat.com/archives/linux-lvm/2016-February/msg00012.html
I have the feeling that you need to deactivate all LVs - not only the thin pool,
but also the thin LVs (first) - roughly as sketched below.
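Something along these lines (the thin LV name is only a placeholder - yours will differ):
lvs gluster_vg1                        # see which thin LVs sit on the pool
lvchange -an gluster_vg1/lv_example    # deactivate each thin LV first (placeholder name)
lvchange -an gluster_vg1/lvthinpool    # then deactivate the pool itself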
Yes, I can take the downtime. Actually, I don't have any choice at the moment
because it is a single node setup. :) From the research I have performed, I think
this is a distributed volume. I posted the lvchange command in my last post;
this was the result. I ran the command lvchange -an
/de
Can you suffer downtime?
You can try something like this (I'm improvising):
Set to global maintenance (either via UI or hosted-engine --set-maintenance
--mode=global)
Stop the engine.
Stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock glusterd.
Stop all gluster processes via the script - roughly as in the commands below.
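Spelled out as commands, that would be roughly the following (the gluster stop script path is an assumption - check what your version ships):
hosted-engine --set-maintenance --mode=global    # or set global maintenance via the UI
hosted-engine --vm-shutdown                      # one way to stop the engine VM
systemctl stop ovirt-ha-agent ovirt-ha-broker vdsmd supervdsmd sanlock glusterd
/usr/share/glusterfs/scripts/stop-all-gluster-processes.sh    # assumed path for the helper script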
Thank you for the reply. Please pardon my ignorance, I'm not very good with
GlusterFS. I don't think this is a replicated volume (though I could be wrong).
I built a single node hyperconverged hypervisor. I was reviewing my gdeploy
file from when I originally built the system. I have the fol
If it's a replicated volume, then you can safely rebuild your bricks and not
even try to repair. There is no guarantee that the issue will not reoccur.
Best Regards,
Strahil Nikolov
On Sep 29, 2019 00:22, jeremy_tourvi...@hotmail.com wrote:
>
> I see evidence that appears to be a problem with