Just to clarify, md0 seems to be working just fine.  It's md1 that seems to
be having issues.  And if md1 was itself partitioned, or set up as an LVM
physical volume, mounting it directly won't work.
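
If there's any chance md1 is an LVM physical volume rather than holding a
filesystem directly, a quick way to check (assuming the live image has
blkid and the LVM tools) would be something like:

# blkid /dev/md1
# pvs

blkid should report TYPE="LVM2_member" for a PV, and pvs would list it if
LVM has picked it up.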

If we parse the data from lsblk, we can see that md0 is made of 5 devices,
sdb1-sdf1, which are all of the same size:

md0   9:0    0 475.7M  0 raid1 /mnt
md0   9:0    0 475.7M  0 raid1 /mnt
md0   9:0    0 475.7M  0 raid1 /mnt
md0   9:0    0 475.7M  0 raid1 /mnt
md0   9:0    0 475.7M  0 raid1 /mnt

sdb1    8:17   0   476M  0 part
sdc1    8:33   0   476M  0 part
sdd1    8:49   0   476M  0 part
sde1    8:65   0   476M  0 part
sdf1    8:81   0   476M  0 part
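
For reference, those lines can be pulled straight out of the full lsblk
listing quoted below with something along the lines of (tree characters
aside):

# lsblk | grep md0
# lsblk | grep 'sd[b-f]1'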

In contrast, if we parse the same information for md1, we see that it is
also made up of 5 devices, sdb2-sdf2, but of varying sizes:

md1   9:1    0 684.1G  0 raid6
md1   9:1    0 684.1G  0 raid6
md1   9:1    0 684.1G  0 raid6
md1   9:1    0 684.1G  0 raid6
md1   9:1    0 684.1G  0 raid6

sdb2    8:18   0 228.2G  0 part
sdc2    8:34   0 930.5G  0 part
sdd2    8:50   0 228.2G  0 part
sde2    8:66   0 464.8G  0 part
sdf2    8:82   0 279.4G  0 part
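
A more direct way to get the same view, assuming this lsblk supports
--inverse, is to print md1's dependency tree, which lists the array and the
partitions (and disks) it sits on:

# lsblk -s -o NAME,SIZE,TYPE /dev/md1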

The RAID size makes sense.  The smallest second partition is 228.2 GB, and
684.1 / 228.2 ≈ 3, which is what you would expect for a five-drive RAID6
(five members minus two for parity).
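
As a rough sanity check (plain shell arithmetic, assuming bc is handy; the
small gap from the reported 684.1G is size rounding plus md metadata
overhead):

# echo '(5 - 2) * 228.2' | bc
684.6
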
But the partitioning seems odd, given that the drives are the same size,
except sdc:

sdb       8:16   0 465.3G  0 disk
sdc       8:32   0   931G  0 disk
sdd       8:48   0 465.3G  0 disk
sde       8:64   0 465.3G  0 disk
sdf       8:80   0 465.3G  0 disk

The second partition takes up the remainder of the drive only on sdc and
sde.  On sdb, sdd, and sdf the second partition covers only part of the
disk (228.2G, 228.2G, and 279.4G out of 465.3G), with no other partition
using the leftover space.  Is that partitioning scheme intended?
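
If it helps to confirm, dumping the partition tables directly would show
exactly how each disk is laid out (the device list here is just
illustrative; adjust as needed):

# fdisk -l /dev/sd[b-f]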

Regards,
- Robert


On Fri, May 13, 2022 at 12:05 PM wes <[email protected]> wrote:

> On Fri, May 13, 2022 at 10:02 AM Ben Koenig <[email protected]>
> wrote:
>
> > I might be channeling Captain Obvious here, but /dev/md0 is basically
> > just a block device.
> >
>
> I believe you are channeling a very different captain.
>
>
> > Sounds like you should identify the filesystem the same way you would for
> > a normal HDD partition. If this array was automounted then does it have
> > an entry in /etc/fstab?
> >
>
> /etc/fstab is on the inaccessible volume. the backup I have of it seems to
> be old or incomplete. or maybe mounting was handled by lvm for most of the
> volumes. I do, however, appear to have a backup of the contents of
> /etc/lvm, including lvm.conf, archive/, and backup/. this doesn't mean the
> current volume was definitely using lvm, but if it was, and maybe it was at
> some kind of offset, this could provide the info we need. now I will see if
> I can find any info on how to import a config file to lvm.
>
>
> > Does lsblk say anything useful? At some point it will need to be mounted
> > like any other partition.
> >
> >
> # lsblk
> NAME    MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
> sdb       8:16   0 465.3G  0 disk
> ├─sdb1    8:17   0   476M  0 part
> │ └─md0   9:0    0 475.7M  0 raid1 /mnt
> └─sdb2    8:18   0 228.2G  0 part
>   └─md1   9:1    0 684.1G  0 raid6
> sr0      11:0    1  1024M  0 rom
> sdc       8:32   0   931G  0 disk
> ├─sdc1    8:33   0   476M  0 part
> │ └─md0   9:0    0 475.7M  0 raid1 /mnt
> └─sdc2    8:34   0 930.5G  0 part
>   └─md1   9:1    0 684.1G  0 raid6
> sdd       8:48   0 465.3G  0 disk
> ├─sdd1    8:49   0   476M  0 part
> │ └─md0   9:0    0 475.7M  0 raid1 /mnt
> └─sdd2    8:50   0 228.2G  0 part
>   └─md1   9:1    0 684.1G  0 raid6
> sde       8:64   0 465.3G  0 disk
> ├─sde1    8:65   0   476M  0 part
> │ └─md0   9:0    0 475.7M  0 raid1 /mnt
> └─sde2    8:66   0 464.8G  0 part
>   └─md1   9:1    0 684.1G  0 raid6
> sda       8:0    0   931G  0 disk
> ├─sda1    8:1    0   476M  0 part
> └─sda2    8:2    0 930.5G  0 part
> sdf       8:80   0 465.3G  0 disk
> ├─sdf1    8:81   0   476M  0 part
> │ └─md0   9:0    0 475.7M  0 raid1 /mnt
> └─sdf2    8:82   0 279.4G  0 part
>   └─md1   9:1    0 684.1G  0 raid6
> sdg       8:96   1  14.5G  0 disk
> └─sdg1    8:97   1  14.5G  0 part  /lib/live/mount/medium
> loop0     7:0    0 322.8M  1 loop  /lib/live/mount/rootfs/filesystem.squashfs
>
> dunno if there's anything helpful in there.
>
> > Also you said this was a 2 drive failure so even if it failed to sync you
> > should still be able to mount it. If the array failed, the filesystem
> > will be completely FUBAR.
> >
>
> it definitely failed. but if it's going to be fubar, should it not have
> also failed to re-assemble and re-sync?
>
> I've repaired many failed (degraded?) arrays before by adding a new drive
> and re-syncing in a live boot environment. I'm trying to understand what
> went wrong on this particular outing.
>
>
> > You can probably just run mount on it and see what gets autodetected.
> >
>
> if file can't identify it, mount surely won't be able to either. however,
> just for fun:
>
> # mount /dev/md1 /mnt
> mount: block device /dev/md1 is write-protected, mounting read-only
> mount: you must specify the filesystem type
>
> -wes
>
