Looks like you might have overwritten the partition data somehow, or the
disk is failing and no longer presenting the partition data correctly.
I'm assuming sdb and sdc are the raid disks containing the data, and sdc
is the one with the valid, recognizable partition table? Meaning sdc has
your intact data, and sdb is the one missing it? That's my conclusion
from looking at this, which means you need/want to mangle sdb, not sdc.
One thing I've had success with is rebuilding the partition structure
with fdisk (on MBR disks; YMMV with GPT) and then accessing the data,
assuming the before/after layout aligns properly. Otherwise you can try
dd surgery, replicating block-for-block data around the superblocks to
recreate the disk structure, but I've only read about that working in
theory.
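As a rough, file-backed sketch of that dd surgery (image files stand in for the disks here, and the offsets are made up for illustration; on real disks you'd use offsets you trust from the surviving partition table):

```shell
# File-backed stand-ins for the two disks (512-byte "sectors");
# the real thing would use /dev/sdb and /dev/sdc with known-good offsets.
dd if=/dev/zero    of=bad.img  bs=512 count=8192 2>/dev/null
dd if=/dev/urandom of=good.img bs=512 count=8192 2>/dev/null

# Copy sectors 2048..4095 from the intact image onto the damaged one,
# leaving everything outside that window untouched (conv=notrunc).
dd if=good.img of=bad.img bs=512 skip=2048 seek=2048 count=2048 conv=notrunc 2>/dev/null

# Verify the copied window now matches byte for byte.
cmp <(dd if=good.img bs=512 skip=2048 count=2048 2>/dev/null) \
    <(dd if=bad.img  bs=512 skip=2048 count=2048 2>/dev/null) \
  && echo "window matches"
```

conv=notrunc is the important part: without it, dd truncates the output at the end of the copied window.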
I'd say recreate the sdb partitions, set the type to fd for raid (FD00
in gdisk), zero any stale superblock on the new partition with mdadm,
and re-add it to the array to let mdadm sync the disks:
sudo mdadm --zero-superblock /dev/sdb2
sudo mdadm --manage /dev/md<> -a /dev/sdb2
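If the table on sdc really is intact, another option is to let sgdisk clone it instead of rebuilding by hand. This is a sketch, not a tested recipe: the device and partition numbers are assumptions from this thread (the gdisk listing quoted below shows the FD00 raid partition as #3, and md0 is a guess), and everything is wrapped in a dry-run that only prints the commands until you deliberately set DRY_RUN=0:

```shell
#!/usr/bin/env bash
# Sketch only: sdc = intact disk, sdb = damaged disk, md0 = assumed array.
# DRY_RUN=1 (the default) prints each command instead of running it.
set -u
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" -eq 1 ]; then echo "would run: $*"; else "$@"; fi; }

# Copy the good GPT from sdc onto sdb, then randomize sdb's GUIDs so the
# two disks don't end up claiming the same identity.
run sgdisk --replicate=/dev/sdb /dev/sdc
run sgdisk --randomize-guids /dev/sdb

# Wipe any stale md metadata on the recreated partition, re-add it, and
# let mdadm resync from the surviving member.
run mdadm --zero-superblock /dev/sdb3
run mdadm --manage /dev/md0 --add /dev/sdb3
run cat /proc/mdstat
```

Run it once with the default DRY_RUN=1 and eyeball the printed commands before committing; --randomize-guids matters because a byte-for-byte copied GPT leaves both disks with identical GUIDs.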
Caveat emptor on the WD external disks: I've never had one last more
than a year (or any vendor's, really), and one WD enclosure wouldn't
even let my PC get past the BIOS while it was plugged in (moving the
disk to another enclosure worked, then it died 6 months later). Vendors
always use their worst-MTBF disks in those enclosures... spend a bit
extra to get the "Red" NAS disks if you must stay with WD, as those are
at least somewhat meant to be used with raid. Hitachi went to hell when
WD bought them (I lost a 3 TB external and a 4 TB internal within a
month of each other), I won't trust Seagate since they bought out
Maxtor's garbage (I lost 5 of 6 raided Maxtor disks within 6 months),
and frankly I haven't found a vendor that makes a disk that lasts more
than a year anymore. Pretty sure it's by design as a profit center...
-mb
On 02/03/2014 02:26 PM, George Toft wrote:
Hi Michael,
lsblk does not show the third partition, but gdisk knows it's there -
see below. See also results when trying to mount the 3rd partition.
[root@localhost ~]# ls -l /mnt
total 6
drwxr-xr-x. 2 root root 4096 Feb 2 15:33 raid
drwxrwxrwx. 1 root root 2304 Feb 2 15:46 sdd1
[root@localhost ~]# mount --read-only /dev/sdc3 /mnt/raid
mount: you must specify the filesystem type
[root@localhost ~]# mount --read-only -t ext4 /dev/sdc3 /mnt/raid
mount: special device /dev/sdc3 does not exist
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 596.2G 0 disk
├─sda1 8:1 0 500M 0 part /boot
└─sda2 8:2 0 595.7G 0 part
├─VolGroup-lv_root (dm-0) 253:0 0 50G 0 lvm /
├─VolGroup-lv_swap (dm-1) 253:1 0 7.8G 0 lvm [SWAP]
└─VolGroup-lv_home (dm-2) 253:2 0 537.9G 0 lvm /home
sdb 8:16 0 2.7T 0 disk
sdc 8:32 0 2.7T 0 disk
├─sdc1 8:33 0 200M 0 part
└─sdc2 8:34 0 2G 0 part
sr0 11:0 1 200M 0 rom
sdd 8:48 0 3.7T 0 disk
└─sdd1 8:49 0 3.7T 0 part /mnt/sdd1
[root@localhost ~]# gdisk /dev/sdc
GPT fdisk (gdisk) version 0.8.8
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Warning! Secondary partition table overlaps the last partition by
4294968498 blocks!
You will need to delete this partition or resize it in another utility.
Command (? for help): p
Disk /dev/sdc: 5860531055 sectors, 2.7 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): BC837200-8528-4F8C-A78B-C529DA2B56CB
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 1565563725
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)
Number Start (sector) End (sector) Size Code Name
1 2048 411647 200.0 MiB EF00
2 411648 4605951 2.0 GiB 8200
3 4605952 5860532223 2.7 TiB FD00
Command (? for help):
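One way to read that overlap warning: the numbers line up exactly with a 32-bit sector-count truncation. A healthy GPT on this disk would put the last usable sector at total - 34, and the value in the header is short of that by exactly 2^32, which is a known symptom of the table having been written while a USB enclosure or driver reported the disk size mod 2^32. That's speculation, but the arithmetic checks out:

```shell
# Numbers copied from the gdisk output above (512-byte sectors).
total=5860531055        # "Disk /dev/sdc: 5860531055 sectors"
last_usable=1565563725  # "last usable sector is 1565563725"
part3_end=5860532223    # end of partition 3

# A healthy GPT puts the last usable sector at total - 34;
# this one is short of that by exactly 2^32:
echo $(( total - 34 - last_usable ))   # -> 4294967296

# ...and the warned overlap is simply partition 3 running past it:
echo $(( part3_end - last_usable ))    # -> 4294968498
```

If that's what happened, the partition data may be fine and only the secondary GPT header wrong, which gdisk can rebuild once the disk reports its real size.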
Maybe this is as simple as getting Linux to see the 3rd partition? I
have another email in the works, but I'm waiting for the 3 TB drive to
finish dd'ing to another drive . . .
Regards,
George Toft
On 2/3/2014 12:23 AM, Michael Butash wrote:
The only time I've used GPT with Linux was on an EFI-boot-only laptop;
before that I could still software-raid the boot drive without having
to use fakeraid at all, for full partition redundancy. Still kind of a
new concept for a lot of people, I think. Ubuntu otherwise happily
still uses MBR, so it was a bit of a curve for me to adapt, as they
don't bake their GPT or raid tools into the initrd or installer very
well.
If you raided your /boot and the *other* raid volume, I'd say just redo
the partitions with gdisk and resync the raid, which is pretty easy (I
have to do this somewhat regularly with my ssd's). Otherwise I run swap
and root from LVM on top of the raid, for full redundancy and easy disk
rebuilds if/when needed. That keeps failure recovery very easy. Only
EFI complicates this, with the crappy non-raidable FAT32 partitions it
now requires (eww, thanks Microsoft).
My GPT/EFI laptop looks much the same with dual ssd's, but has an
identical FAT32 partition first on each disk to satisfy Ubuntu as
/boot/EFI and /bootEFI1, plus an md-raided /boot second, and a crypt
volume third. If you're not adding encryption, put LVM atop the mdraid
PV for a lot more flexibility in volume/redundancy restoration among
disks. I just rsync the stupid EFI FAT32 partitions.
mb@host:~$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sdh 8:112 0 111.8G 0 disk
├─sdh1 8:113 0 100M 0 part
│ └─md127 9:127 0 100M 0 raid1 /boot
└─sdh2 8:114 0 111.7G 0 part
└─md126 9:126 0 111.7G 0 raid1
└─spv0 (dm-0) 252:0 0 111.7G 0 crypt
├─vg0-root (dm-1) 252:1 0 2G 0 lvm /
├─vg0-swap (dm-2) 252:2 0 2G 0 lvm [SWAP]
├─vg0-var (dm-3) 252:3 0 2.5G 0 lvm /var
├─vg0-usr (dm-4) 252:4 0 10G 0 lvm /usr
├─vg0-home (dm-5) 252:5 0 32G 0 lvm /home
sdi 8:128 0 111.8G 0 disk
├─sdi1 8:129 0 100M 0 part
│ └─md127 9:127 0 100M 0 raid1 /boot
└─sdi2 8:130 0 111.7G 0 part
└─md126 9:126 0 111.7G 0 raid1
└─spv0 (dm-0) 252:0 0 111.7G 0 crypt
├─vg0-root (dm-1) 252:1 0 2G 0 lvm /
├─vg0-swap (dm-2) 252:2 0 2G 0 lvm [SWAP]
├─vg0-var (dm-3) 252:3 0 2.5G 0 lvm /var
├─vg0-usr (dm-4) 252:4 0 10G 0 lvm /usr
├─vg0-home (dm-5) 252:5 0 32G 0 lvm /home
-mb
On 02/02/2014 08:44 PM, George Toft wrote:
Installed gdisk, and it looks like /dev/sdb is damaged but /dev/sdc is
good :) Doing a dd of the whole drive to a file on another drive so I
have a backup. I'll check back in a couple of days when it's done.
Regards,
George Toft
On 2/2/2014 2:58 PM, Matt Graham wrote:
# fdisk -l | egrep "GPT|dev"
WARNING: fdisk doesn't support GPT.
/dev/sdb1 1 267350 2147483647+ ee GPT
# mdadm --assemble --run /dev/md0 /dev/sdb1
mdadm: cannot open device /dev/sdb1: No such
file or directory
This is an odd message to get; it probably means that udev didn't
find the device and create it because udev and/or the rescue
system's GPT support is flaking out. Does the kernel in this rescue
system support GPT? "mknod /dev/sdb1 b 8 17" will create the node by
hand. You may also wish to "mknod /dev/sdc1 b 8 33" in case the other
softRAID-1 disk has better stuff on it.
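The partition George actually wants is /dev/sdc3, and its device numbers follow the same pattern: lsblk earlier in the thread shows sdc as 8:32, and partition N gets the disk's base minor plus N. A small sketch of that arithmetic:

```shell
# sdc shows as MAJ:MIN 8:32 in lsblk, so partition N is minor 32+N.
base_minor=32
part=3
echo "mknod /dev/sdc${part} b 8 $(( base_minor + part ))"
# -> mknod /dev/sdc3 b 8 35
```

If the rescue environment has them, "partprobe /dev/sdc" or "partx -a /dev/sdc" may also get the kernel to re-read the table and create the nodes without any mknod at all.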
As other people have said, there should be no need to use mdadm to
assemble an array out of RAID-1 partitions. "mount /dev/sdb1
/mnt/somewhere" should do something useful if the device node and
/mnt/somewhere exist.
On 2014-02-02 12:57, Michael Butash wrote:
Use gdisk if/when doing gpt
That too. (One day, we will forsake our filesystems and break all
bonds of block devices to get a disk larger than 2T for actual
experience with GPT, but today is *not* this day. This day, we
*SOLVE TECH PROBLEMS!!!1!*)
---------------------------------------------------
PLUG-discuss mailing list - [email protected]
To subscribe, unsubscribe, or to change your mail settings:
http://lists.phxlinux.org/mailman/listinfo/plug-discuss