Admittedly, I haven't played with LVM in a while. But here's a nice resource:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/index

From the output you posted, your md1 RAID6 looks like it's working fine,
i.e. none of the five devices that make up the array has failed. However,
from the pvdisplay output, it does not look like LVM sees any Physical
Volumes, and without a PV you have no Volume Group and no LVM. It's
possible that the PVs just need to be rescanned or "activated," but my
memory is fuzzy on that (a few candidate commands are toward the end of
this message).

Before playing around with LVM, though, I'd strongly recommend making a
backup image of /dev/md1 if your data is important and you don't already
have a backup. If the data is not important, have at it and ignore the
rest of this message.

The RAID is less than 1TB, and 1TB USB drives are relatively inexpensive,
i.e. < $100. Just plug one in and image the array with dd. For example:

  # dd if=/dev/md1 of=/mnt/USB/raid6.img bs=1M status=progress

I'd also recommend verifying the copy with a checksum:

  # dd if=/dev/md1 bs=1M | md5sum
  # md5sum /mnt/USB/raid6.img

If the two sums aren't the same, redo the image.

I like the 3-2-1 rule for backups:

https://www.seagate.com/solutions/backup/what-is-a-3-2-1-backup-strategy/

So, once you have the image on the USB drive, copy it someplace else,
e.g. another drive or the cloud. In fact, once the data is in the cloud,
e.g. on an AWS EC2 instance, I would do the LVM experiments there. It's
easy and inexpensive to spin up a beefy system with a few TB of disk,
copy the data over, try things out, wipe it clean and try again if you
mess up, and blow everything away to stop the meter once you have notes
for a working solution.

BTW, the above commands are off the top of my head and I have not
verified them. Or use whatever backup tool you are most comfortable with.
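For the "copy it someplace else" step, anything that can move a big file
will do. A rough sketch, where the host and destination path are only
placeholders and not anything from your setup:

  # rsync -avP /mnt/USB/raid6.img user@otherbox:/backups/

scp or your cloud provider's CLI would work just as well.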
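And if an experiment on the array goes sideways, you can lay the image
back down. This overwrites /dev/md1, so triple-check the of= device
before running something like:

  # dd if=/mnt/USB/raid6.img of=/dev/md1 bs=1M status=progress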
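As for the missing "LABELONE LVM2" signature you mentioned, one quick,
read-only check is to look for the LVM label near the start of the
device. These are from memory and assume the PV, if there ever was one,
sat directly on /dev/md1 rather than on a partition inside it:

  # blkid -p /dev/md1
  # dd if=/dev/md1 bs=512 count=8 2>/dev/null | hexdump -C | grep LABELONE

If neither shows any sign of LVM, the PV label really is gone, which
would explain pvdisplay coming back empty.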
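If a label does turn up and LVM just isn't seeing it, the "activation" I
was trying to remember is roughly this sequence (again unverified, so
check the man pages first):

  # pvscan
  # vgscan
  # vgchange -ay
  # lvscan

vgchange -ay is the step that actually activates any volume groups the
scans find.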
Good luck and let us know how things go.

Regards,
- Robert

On Sat, Apr 30, 2022 at 4:16 PM wes <[email protected]> wrote:

> # mdadm --detail /dev/md0
> /dev/md0:
>            Version : 1.2
>      Creation Time : Thu Apr 11 18:48:11 2013
>         Raid Level : raid1
>         Array Size : 487104 (475.77 MiB 498.79 MB)
>      Used Dev Size : 487104 (475.77 MiB 498.79 MB)
>       Raid Devices : 6
>      Total Devices : 5
>        Persistence : Superblock is persistent
>
>        Update Time : Sat Apr 16 05:15:16 2022
>              State : clean, degraded
>     Active Devices : 5
>    Working Devices : 5
>     Failed Devices : 0
>      Spare Devices : 0
>
>               Name : tatooine:0
>               UUID : a3cf1d73:5a14862c:8affcca7:036adaa0
>             Events : 16482
>
>     Number   Major   Minor   RaidDevice State
>        6       8       33        0      active sync   /dev/sdc1
>        1       8       17        1      active sync   /dev/sdb1
>        7       8       65        2      active sync   /dev/sde1
>        3       8       49        3      active sync   /dev/sdd1
>        8       8       81        4      active sync   /dev/sdf1
>        5       0        0        5      removed
>
>
> # mdadm --detail /dev/md1
> /dev/md1:
>            Version : 1.2
>      Creation Time : Sun Apr 3 03:36:58 2022
>         Raid Level : raid6
>         Array Size : 717376512 (684.14 GiB 734.59 GB)
>      Used Dev Size : 239125504 (228.05 GiB 244.86 GB)
>       Raid Devices : 5
>      Total Devices : 5
>        Persistence : Superblock is persistent
>
>      Intent Bitmap : Internal
>
>        Update Time : Fri Apr 22 21:47:14 2022
>              State : active
>     Active Devices : 5
>    Working Devices : 5
>     Failed Devices : 0
>      Spare Devices : 0
>             Layout : left-symmetric
>         Chunk Size : 512K
>
>               Name : tatooine:1
>               UUID : 748c2cdc:113ecda4:8a52c229:384d3438
>             Events : 2294
>
>     Number   Major   Minor   RaidDevice State
>        0       8       18        0      active sync   /dev/sdb2
>        1       8       50        1      active sync   /dev/sdd2
>        2       8       66        2      active sync   /dev/sde2
>        3       8       82        3      active sync   /dev/sdf2
>        4       8       34        4      active sync   /dev/sdc2
>
>
> # pvdisplay
> # lvdisplay
> No volume groups found
> # vgdisplay
> No volume groups found
>
>
> obviously, md0 works just fine, being a raid1 with at least 1 drive
> surviving.
>
> md1 with 4 drives out of 6 is somewhat less clear that it could survive.
> it did rebuild, so I can only assume that means it at least believes the
> data is intact. I guess I'm looking for a way to validate or invalidate
> this belief.
>
> -wes
>
> On Sat, Apr 30, 2022 at 2:57 PM Robert Citek <[email protected]>
> wrote:
>
> > Greetings, Wes.
> >
> > From the information you provided, I’m guessing you built the RAID from
> > five drives ( /dev/sd{f,g,h,i,j} ), which created a single /dev/mda
> > device. That device was partitioned to create devices /dev/mda{1-10}.
> > And those partitions were then used for the LVM.
> >
> > But that’s just a guess.
> >
> > What are the five drives named? Are they really entire drives or
> > partitions on a drive? e.g. /dev/sda vs /dev/sda7.
> >
> > What is the output from the mdadm command that shows you the RAID’s
> > configuration? It will show the devices used, how it’s put together,
> > and it’s current state. ( the actual command syntax with options slips
> > my mind at the moment. )
> >
> > What is the output of the various LVM display commands? pvdisplay,
> > vgdisplay, lvdisplay
> >
> > Regards,
> > - Robert
> >
> > On Sat, Apr 30, 2022 at 1:09 PM wes <[email protected]> wrote:
> >
> > > I have a raid6 array built with mdadm, 5 drives with a spare. it
> > > lost 2 drives, so I replaced them and the array rebuilt itself
> > > without much complaint. I believe the array contained an lvm pv (I
> > > didn't originally build it), but I can't seem to get it to detect
> > > now. on my other systems configured the same way, the first
> > > characters on the block device are "LABELONE LVM2" - not so on this
> > > broken system.
> > >
> > > running strings on the broken volume returns what appear to be
> > > filenames:
> > >
> > > [jop61.gz
> > > [jop61.gz
> > > Ftof~.1.gz
> > > [ehzqp.1.gz
> > > Fwteh.1.gz
> > > Fwteh.1.gz
> > > utvame.1.gz
> > > utvame.1.gz
> > >
> > > so clearly there is _something_ there but I can't figure out how to
> > > tell what it is.
> > >
> > > # file -s /dev/md1
> > > /dev/md1: sticky data
> > >
> > > any ideas on things to check or try? this is not a critical system
> > > so this is mostly an academic exercise. I would like to understand
> > > more about this area of system administration.
> > >
> > > thanks,
> > > -wes
