Re: [PLUG] filesystems question

2022-05-13 Thread David Bridges
I've forced this hundreds of times over the years working for a hosting provider where we used software RAID/LVM on almost all of our servers. I've found the following commands quite helpful in a situation like the one you describe. I'd usually be using a CentOS rescue environment mainly
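For reference, a typical forced-reassembly sequence from a rescue shell might look something like this (a sketch only, assuming the md1 array and sdb2-sdf2 members discussed elsewhere in this thread):

  # make sure the kernel isn't holding a half-assembled array
  mdadm --stop /dev/md1
  # force assembly from the surviving members, even if their event counts disagree
  mdadm --assemble --force /dev/md1 /dev/sd[bcdef]2

Forcing assembly accepts whatever data is on the members, so it's worth checking the filesystem or LVM contents afterwards.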

Re: [PLUG] filesystems question

2022-05-13 Thread wes
On Fri, May 13, 2022 at 12:53 PM Ben Koenig wrote: > > I suspect this is a result of many years of different sysadmins replacing > > drives as they failed. we probably had the idea of eventually increasing > > the array's storage size once all the smaller drives were replaced with > > larger
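If that was the plan, the usual sequence once every member has been replaced with a larger drive is roughly the following (a sketch, assuming md1 with an LVM PV directly on the array; if the members are partitions, those would have to be enlarged first):

  # let the array use the full capacity of its now-larger members
  mdadm --grow /dev/md1 --size=max
  # then grow the LVM physical volume to match the new array size
  pvresize /dev/md1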

Re: [PLUG] filesystems question

2022-05-13 Thread wes
On Fri, May 13, 2022 at 12:07 PM Robert Citek wrote: > > In contrast, if we parse the same information for md1, we see that it is > also made up of 5 devices, sdb2-sdf2, but of varying sizes: > > The RAID makes sense. The smallest partition size is 228.2 GB. And 684.1 > / 228.2 = 3 which is
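For a RAID6 array, usable capacity is (members - 2) x smallest member, since two members' worth of space goes to parity. With 5 members and a smallest member of 228.2 GB, that gives (5 - 2) x 228.2 GB, roughly 684.6 GB, consistent with the 684.1 GB figure.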

Re: [PLUG] filesystems question

2022-05-13 Thread Robert Citek
Just to clarify, md0 seems to be working just fine. It's md1 that seems to be having issues. And if md1 is partitioned or configured as an LVM physical volume, mounting it directly won't work. If we parse the data from lsblk, we can see that md0 is made of 5 devices, sdb1-sdf1, which are all of the
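A sketch of how to tell which case applies (assuming the /dev/md1 name from above and standard util-linux/LVM tools):

  # look for a filesystem, LVM, or partition-table signature on the array
  blkid /dev/md1
  # if it is an LVM physical volume, activate and list the volume group
  vgscan
  vgchange -ay
  lvs
  # if it is partitioned instead, rescan it and look for md1p1, md1p2, ...
  partprobe /dev/md1
  lsblk /dev/md1

Whatever turns up (a logical volume or an md1pN partition) is what actually gets mounted, not md1 itself.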

Re: [PLUG] filesystems question

2022-05-13 Thread wes
On Fri, May 13, 2022 at 10:02 AM Ben Koenig wrote: > I might be channeling Captain Obvious here, but /dev/md0 is basically just > a block device. > I believe you are channeling a very different captain. > Sounds like you should identify the filesystem the same way you would for > a normal HDD
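In that spirit, a couple of generic probes (assuming /dev/md1 is the device in question):

  # report any recognizable signature at the start of the device
  file -s /dev/md1
  # show size and detected filesystem type for the array and anything stacked on it
  lsblk -f /dev/md1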

Re: [PLUG] filesystems question

2022-05-13 Thread wes
On Fri, May 13, 2022 at 8:04 AM Robert Citek wrote: > Admittedly, I haven't played with LVM in a while. But here's a nice > resource: > > > https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/index > > this does look nice, but

Re: [PLUG] filesystems question

2022-05-13 Thread Robert Citek
Admittedly, I haven't played with LVM in a while. But here's a nice resource: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_and_managing_logical_volumes/index From the output you posted, your md1 RAID6 looks like it's working fine, i.e. no failed
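A quick way to double-check the array itself (no assumptions beyond the md1 name):

  # one-line status of every md array, including degraded/rebuilding state
  cat /proc/mdstat
  # per-array detail: State, Failed Devices, and the member list
  mdadm --detail /dev/md1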

Re: [PLUG] filesystems question

2022-05-12 Thread wes
On Sat, Apr 30, 2022 at 3:16 PM wes wrote: > > with md1 at 4 drives out of 6, it is somewhat less clear that it could survive. > it did rebuild, so I can only assume that means it at least believes the > data is intact. I guess I'm looking for a way to validate or invalidate > this belief. >
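One way to test that belief is an md consistency scrub, which reads every stripe and compares data against parity (a sketch, assuming the array is /dev/md1; it checks RAID consistency only, not whether the files themselves are sane, so an fsck of the filesystem on top is still worthwhile):

  # start a consistency scrub of the array
  echo check > /sys/block/md1/md/sync_action
  # watch progress
  cat /proc/mdstat
  # when it finishes, 0 here means no data/parity mismatches were found
  cat /sys/block/md1/md/mismatch_cnt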

Re: [PLUG] filesystems question

2022-04-30 Thread wes
# mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Thu Apr 11 18:48:11 2013
     Raid Level : raid1
     Array Size : 487104 (475.77 MiB 498.79 MB)
  Used Dev Size : 487104 (475.77 MiB 498.79 MB)
   Raid Devices : 6
  Total Devices : 5
    Persistence : Superblock is

Re: [PLUG] filesystems question

2022-04-30 Thread Robert Citek
Greetings, Wes. From the information you provided, I’m guessing you built the RAID from five drives (/dev/sd{f,g,h,i,j}), which created a single /dev/mda device. That device was partitioned to create devices /dev/mda{1-10}. And those partitions were then used for the LVM. But that’s just a
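One way to confirm or rule out that layout (nothing assumed beyond standard util-linux):

  # show the whole block-device stack: disks, partitions, md arrays, and LVM volumes
  lsblk -o NAME,SIZE,TYPE,FSTYPE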

[PLUG] filesystems question

2022-04-30 Thread wes
I have a RAID6 array built with mdadm, 5 drives with a spare. It lost 2 drives, so I replaced them and the array rebuilt itself without much complaint. I believe the array contained an LVM PV (I didn't originally build it), but I can't seem to get it to detect now. On my other systems configured
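For reference, a few commands that should show whether the PV label survived the rebuild (a sketch, assuming the array in question is /dev/md1):

  # rescan block devices for LVM physical-volume labels
  pvscan
  # list detected PVs; the array should appear here if its PV metadata is intact
  pvs
  # inspect the PV label and metadata on the array directly
  pvck /dev/md1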