On Thu, Nov 17, 2011 at 9:25 AM, Neil Bothwick <n...@digimed.co.uk> wrote:
> On Thu, 17 Nov 2011 09:01:46 -0800, Mark Knecht wrote:
>
>> I'm pretty sure I've got the command set right to do the RAID-1 to
>> RAID-5 conversion, but once it's done I believe the file system itself
>> will still be 250GB, so I'll need to resize the file system. In the
>> past I've done this with gparted, which seems to work fine, but this
>> time I was considering doing it at the command line. Does anyone know
>> of a good web site that goes through how to do that? I've browsed
>> around and found different pages that talk about it, but they all seem
>> to have minor differences, which leaves me a bit worried.
>
> Using cfdisk or fdisk, delete the partition and recreate it, USING THE
> SAME START BLOCK, at a larger size.
>
> Then "resize2fs /dev/sdwhatever" will resize the filesystem to fill the
> partition.
>
> --
> Neil Bothwick
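For what it's worth, the resize2fs half of that can be tried safely without root on a scratch image file first — a rough sketch, assuming e2fsprogs is installed; the file name and sizes here are just for illustration, and growing the backing file stands in for "recreate the partition with the same start block":

```shell
# No-root demo of the resize2fs step on a scratch image file.
IMG=$(mktemp)
truncate -s 16M "$IMG"              # stand-in for the original partition
mkfs.ext2 -q -F -b 1024 "$IMG"      # 16384 1K blocks
truncate -s 32M "$IMG"              # "recreate the partition" larger, same start
e2fsck -f -p "$IMG" > /dev/null     # resize2fs insists on a recent fsck
resize2fs "$IMG" > /dev/null 2>&1   # grow the fs to fill the new size
BLOCKS=$(dumpe2fs -h "$IMG" 2>/dev/null \
         | awk -F: '/^Block count/ { gsub(/ /, "", $2); print $2 }')
echo "filesystem is now $BLOCKS blocks"
rm -f "$IMG"
```

The filesystem ends up spanning the whole enlarged file (32768 1K blocks here), which is all resize2fs does on a real partition too.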
Really? Delete the partition? Sounds scary! (But it actually makes sense;
the data is still there.) I'm not sure how this works in the case of a
RAID, though. Here's the current partition table for sda, where sda6,
sdb6 & sdc6 are part of the RAID-1:

c2stable ~ # fdisk -l /dev/sda

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x8b45be24

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63      112454       56196   83  Linux
/dev/sda2          112455     8514449     4200997+  82  Linux swap / Solaris
/dev/sda3         8594775   113467094    52436160   fd  Linux raid autodetect
/dev/sda4       113467095   976768064   431650485    5  Extended
/dev/sda5       113467158   218339414    52436128+  fd  Linux raid autodetect
/dev/sda6       481933935   976768064   247417065   83  Linux
/dev/sda7       218339478   481933871   131797197   fd  Linux raid autodetect

Partition table entries are not in disk order
c2stable ~ #

It's not that I want to change the partition size of the three pieces of
the RAID-1; it's that after I convert the RAID-1 to RAID-5 I want it to
be 500GB. I asked some questions on the Linux RAID list and, putting
together info from a couple of people, here's how I'm thinking I proceed
with the conversion:

1) First, fail one disk and clean it up for later:

umount /dev/md6
mdadm --stop /dev/md6
mdadm /dev/md6 --fail /dev/sdc6 --remove /dev/sdc6
mdadm --zero-superblock /dev/sdc6

At this point the RAID-1 is still 3 drives, but one is marked 'failed'.
The failed drive is at this point like a new drive, as it has no
superblock. (I think...)

2) Now I convert the 3-drive RAID-1 to a 2-drive RAID-1:

mdadm --grow /dev/md6 --raid-devices=2

3) Create a 2-drive RAID-5: mdadm has an 'instantaneous' conversion of
RAID-1 to RAID-5 for the 2-drive case, because the parity of a single
drive is just the data itself. /dev/sdb6 is now 'parity' instead of
'data'.
mdadm /dev/md6 --grow --level=5

4) Add a 3rd drive to the RAID-5:

mdadm /dev/md6 --add /dev/sdc6
mdadm /dev/md6 --grow --raid-devices=3

At this point I was told: "Now, resize your filesystem to use the
additional space."

So, if at this point the end block of sda6 isn't 976768064 but, let's
say, 700000000, because mdadm set it to something new, then using your
suggestion I guess I'd set it back to 976768064? I'm not comfortable,
however, that if I do that, whatever is out there beyond 700000000 is
really formatted as ext3 and 'empty', as I don't know what the mdadm
conversion has done to it.

Thanks,
Mark
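As a sanity check on the numbers, the usable capacity before and after the steps above ought to come out like this — a back-of-the-envelope sketch, where the per-device size is taken from the sda6 fdisk line (247417065 1K blocks) and I'm assuming sdb6 and sdc6 match it:

```shell
# Usable capacity: RAID-1 keeps one copy regardless of mirror count;
# RAID-5 gives (n-1) * device size. Per-device size from the sda6 line.
per_dev_kib=247417065
n=3
raid1_usable_kib=$per_dev_kib
raid5_usable_kib=$(( (n - 1) * per_dev_kib ))
echo "RAID-1 usable: $(( raid1_usable_kib / 1024 / 1024 )) GiB"   # 235 GiB
echo "RAID-5 usable: $(( raid5_usable_kib / 1024 / 1024 )) GiB"   # 471 GiB
```

In the decimal GB that drive vendors quote, those are roughly 253GB and 507GB, i.e. the "250GB" and "500GB" figures above.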