Re: pvcreate to raid1 fails
I just want to add the response to my bugzilla submission. Note that there is an LVM2 issue as well as a kernel issue. Both are fixed, but neither is in the latest Fedora 12 updates. When I have some time I will install LVM2 2.02.62, either by building from source or by a binary install from the repo. I may wait until Fedora 13 to try it again, possibly by installing the beta.

--

I believe there is a combination of failures here.

1) LVM2 had an issue where a pvcreate would fail if the underlying device (e.g. /dev/md1) was misaligned (aka: alignment_offset=-1). This has since been fixed and is available in LVM2 2.02.62, see:
   http://sources.redhat.com/git/gitweb.cgi?p=lvm2.git;a=commit;h=8cb8f65010c
   (NOTE: please verify that /sys/block/md1/alignment_offset is -1)

2) The kernel (2.6.32.9-70.fc12.x86_64) does not contain the latest upstream fixes that were made to blk_stack_limits(). Without these fixes the stacking of limits (by MD) is prone to failure (resulting in alignment_offset=-1), see:
   http://git.kernel.org/linus/81744ee44ab284
   http://git.kernel.org/linus/fe0b393f2c0a0d

---

On 03/01/2010 05:48 PM, Jerry Feldman wrote:

I am migrating my system to a raid 1. Given:

  Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
  255 heads, 63 sectors/track, 121601 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
  Disk identifier: 0x0003a96c

     Device Boot      Start         End      Blocks   Id  System
  /dev/sdc1               1          25      200781   fd  Linux raid autodetect
  /dev/sdc2              26      121601   976559220   fd  Linux raid autodetect

  ARRAY /dev/md0 UUID=4420140e:d8a9d5b5:91e28e81:d7bbf71d
  ARRAY /dev/md1 UUID=05a09bd8:f968bb0e:91e28e81:d7bbf71d

  [r...@gaf gaf]# pvcreate -v -f /dev/md1
    /dev/md1: pe_align (128 sectors) must not be less than pe_align_offset (36028797018963967 sectors)
    /dev/md1: Format-specific setup of physical volume failed.
    Failed to setup physical volume /dev/md1

Note that /dev/md1 is formatted ext4. Note that md0 contains 2 devices (/dev/sda1 and /dev/sdc1), but /dev/md1 contains only /dev/sdc2. What I am trying to do is to migrate my system (LVM on /dev/sda2) to raid1.

--
Jerry Feldman <g...@blu.org>
Boston Linux and Unix
PGP key id: 537C5846
PGP Key fingerprint: 3D1B 8377 A3C0 A5F2 ECBB CA3B 4607 4319 537C 5846

___
gnhlug-discuss mailing list
gnhlug-discuss@mail.gnhlug.org
http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/
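The verification step in (1) above can be scripted; a minimal sketch (the helper name and output strings are mine, not from the bug report or LVM):

```shell
# Minimal sketch: read an md device's alignment_offset from sysfs and
# flag the -1 case that trips pvcreate. Illustrative helper, not from LVM.
check_alignment() {
    offset=$(cat "$1")
    if [ "$offset" -lt 0 ]; then
        echo "misaligned (alignment_offset=$offset)"
    else
        echo "ok (alignment_offset=$offset)"
    fi
}

# Intended usage on a live system:
#   check_alignment /sys/block/md1/alignment_offset
```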
Re: pvcreate to raid1 fails
On 03/06/2010 06:55 PM, Benjamin Scott wrote:

On Sat, Mar 6, 2010 at 2:15 PM, Jerry Feldman <g...@blu.org> wrote:
First I booted into Knoppix 6.2.1, started raid1, pvcreate failed, stopped mdadm and did a pvcreate on /dev/sdc2. Booted back into Fedora, and I have successfully done a vgextend to /dev/md1 ...

Ummm... I'm not sure, but I think that will end badly. Both md and LVM have superblocks (or some other on-disk metadata structure). That has to use space, so md1 is going to be smaller than sdc2. By creating the PV on sdc2 (without going through the md layer), LVM is unaware of the md superblock. I think LVM puts the PV superblock at the start of the device, and md puts its superblock at the end. So everything will appear to work until LVM happens to write to the end of md1, at which point it will (at best) get an "I/O past end of device" sort of error, or (at worst) overwrite the md superblock.

The only reason it appears to work at all is that md1 is RAID1 with only one member, so you can write anything to the non-superblock areas and the md layer will just pass that back up the storage stack. So LVM has no way of knowing that the device it reads the PV superblock from (md1) is not equivalent to the device it was written to (sdc2). I could be wrong, but that's my take. pvck may or may not detect this.

In any case, I decided to install from scratch. After allocating the RAID1 volumes and LVMs underneath them, when Fedora (anaconda) went to write to disk it failed. I then redid it, cutting the sizes in half so each RAID1 volume was under 500GB. Same error. I then installed SuSE 11.2. It had no problem with the RAID1 and LVM, but at some point it had errors mounting /dev/md0. I also noted that the partitions were not on cylinder boundaries. I then tried Ubuntu 9.10, but there was no way RAID1 could be set up through the installer. In any case, I then used fdisk to set my partitions on cylinder boundaries and simply installed Fedora 12 with LVM.

The next step is to check bugzilla and possibly file a problem report, and I'll repeat this exercise after a while. A couple of things I could try without messing up the system is a pvcreate with different extent sizes.

--
Jerry Feldman <g...@blu.org>
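Ben's size argument can be made concrete with rough arithmetic. This sketch assumes a v0.90 md superblock (64 KiB, 64 KiB-aligned, near the end of the member device — an assumption, not something stated in the thread) and uses the sdc2 size from the earlier fdisk output:

```shell
# Rough arithmetic for the point above: the md superblock consumes space,
# so md1 is smaller than its member partition sdc2.
# Assumes a v0.90 superblock: 64 KiB, 64 KiB-aligned, at the end of the member.
part_kib=976559220                 # /dev/sdc2 size from fdisk (1 KiB blocks)
sb_kib=64
usable_kib=$(( (part_kib / sb_kib) * sb_kib - sb_kib ))
echo "sdc2: ${part_kib} KiB; md1 usable: ${usable_kib} KiB"
```

So a PV written directly on sdc2 believes it owns more space than md1 can actually provide, which is exactly the end-of-device write hazard Ben describes.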
Re: pvcreate to raid1 fails
I've finally done a successful pvcreate. First I booted into Knoppix 6.2.1, started raid1, pvcreate failed, stopped mdadm, and did a pvcreate on /dev/sdc2. Booted back into Fedora, and I have successfully done a vgextend to /dev/md1, and am now moving the extents over. Once I have everything moved and the fstab set up correctly, before I add /dev/sda2 to the raid1, I will remove /dev/sda1 from /dev/md0 and repartition /dev/sda on cylinder boundaries. The messy step is rebuilding the boot partition, but I've done that many times before.

On 03/04/2010 11:23 AM, Benjamin Scott wrote:

On Thu, Mar 4, 2010 at 7:34 AM, Jerry Feldman <g...@blu.org> wrote:
On 03/03/2010 09:53 PM, Benjamin Scott wrote:
md1 (currently only on sdc2) is bigger than sda2. You will not be able to mirror md1 back on to sda2 without repartitioning sda, which will mean removing sda1 from md0.

That is intentional. /dev/sda is a Seagate and /dev/sdc is a WD.

Interesting. According to the fdisk output, they're identical in size, down to the block. Usually I don't get that lucky, I've found. :)

This way I reduce the chance of a simultaneous failure, like we had at the BLU last year where all of our drives were the same MFR and lot #.

Hmmm, that's a good idea.

--
Jerry Feldman <g...@blu.org>
Re: pvcreate to raid1 fails
On 03/03/2010 09:53 PM, Benjamin Scott wrote:

Your two disks (eventually to be mirrored) are identical in size, but the partition tables are different. That is okay, but it may confuse people and/or software. For example: md1 (currently only on sdc2) is bigger than sda2. You will not be able to mirror md1 back on to sda2 without repartitioning sda, which will mean removing sda1 from md0.

That is intentional. /dev/sda is a Seagate and /dev/sdc is a WD. This way I reduce the chance of a simultaneous failure, like we had at the BLU last year where all of our drives were the same MFR and lot #. Actually, somehow the Fedora partitioner set up /dev/sda1 to end not on a cylinder boundary, and I was unable to set up /dev/sdc1 exactly the same. If I start from scratch, they will be the same exact size.

--
Jerry Feldman <g...@blu.org>
Re: pvcreate to raid1 fails
On 03/03/2010 09:53 PM, Benjamin Scott wrote:

Neat. Whoever wrote that code split the error message across multiple adjacent C literal strings. Sometimes I wonder if programmers are deliberately making our lives harder.

You should have seen how we did it in lint on Tru64. Part of it used a message catalog, part used its own internal message array. Basically, when I write code I try to keep my lines under 80 columns for readability, but I don't like to break up my messages that way either. That is why I shortened the search.

Is the incredibly-large-number different for different runs of the program? (If so, it's prolly an uninitialized variable; if not, it's prolly broken program logic doing something consistently non-sensible. Not that that helps us much.) I note that you're running x86-64. I wonder if it's programmer brain damage, assuming that all integers are 32 bits wide.

I think it is more of an uninitialized variable issue, but I did not trace the code back that far. My time is a bit crunched this week. BTW: I appreciate the insight on this. IMHO, this is a bug in the LVM2 library, and I may file a bugzilla on it when I can grab more info. I'm brain damaged by the Digital compiler guys who used to beat the Hell out of me in the ZKO caf when I would file a bug report, especially Ed Vogel :-)

--
Jerry Feldman <g...@blu.org>
Re: pvcreate to raid1 fails
On Wed, Mar 3, 2010 at 9:53 PM, Benjamin Scott <dragonh...@gmail.com> wrote:

On Wed, Mar 3, 2010 at 9:23 AM, Jerry Feldman <g...@blu.org> wrote:
I am running Fedora 12 with kernel 2.6.31.12-174.2.3.fc12.x86_64. LVM is lvm2-2.02.53-2.fc12.x86_64.

Does Fedora have any updates for the kernel or LVM (or device mapper, etc.) to install?

I've posted the details at: http://pastebin.com/4AtMzEjr

According to the output of fdisk -l, the end of sda1 and the start of sda2 both occur within cylinder 26. This may or may not be a problem. Can you post fdisk -lu /dev/sda output so we can see the exact sector layout? I want to make sure the partitions do not overlap. According to IBM/Microsoft, partitions start and end on cylinder boundaries. If you ever use any OS or software which assumes IBM/Microsoft semantics, that may cause data loss, since as far as such software sees things, your partitions overlap. And IBM/Microsoft did define the pee sea partition table format... The cylinder boundary issue isn't supposed to matter to Linux (as long as sectors don't overlap), but partitioning in the pea sea is such a crock that it still has me worried.

FWIW, BSD (& Solaris) also use cylinder boundaries: 8 slices per disk (0-7). Slice 2 is the full disk, cylinders 0-N, and shouldn't be used for anything. Overlapping cylinders will lead to data loss eventually, and newer versions of Solaris prevent format (not fdisk) from doing that. I don't remember about the BSD systems; older BSD-based ones did nothing to prevent self-LARTing.
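The overlap concern can be checked mechanically once you have the sector numbers from fdisk -lu; a small sketch (the function and the sample sector values are invented for illustration, not taken from Jerry's disks):

```shell
# Sketch: decide whether two partitions' sector ranges overlap, given
# start/end sectors as printed by `fdisk -lu`. Sample numbers invented.
overlaps() {
    # $1..$2 = first range, $3..$4 = second range (inclusive sectors)
    [ "$1" -le "$4" ] && [ "$3" -le "$2" ]
}

if overlaps 63 401624 401625 1953520064; then
    echo "overlap: data loss waiting to happen"
else
    echo "disjoint"
fi
```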
Re: pvcreate to raid1 fails
On 03/04/2010 09:50 AM, Tom Buskey wrote:

FWIW, BSD (& Solaris) also use cylinder boundaries: 8 slices per disk (0-7). Slice 2 is the full disk, cylinders 0-N, and shouldn't be used for anything. Overlapping cylinders will lead to data loss eventually, and newer versions of Solaris prevent format (not fdisk) from doing that. I don't remember about the BSD systems; older BSD-based ones did nothing to prevent self-LARTing.

As I mentioned, I was not aware that Anaconda (gparted) had allocated the boot partition to end in the middle of a cylinder. Since virtually everything on my system is backed up, I think that rather than fighting with what I perceive as a bug in LVM2, I will build my system from scratch. This will fix the cylinder alignment (which is not really an issue at this point).

--
Jerry Feldman <g...@blu.org>
Re: pvcreate to raid1 fails
On Thu, Mar 4, 2010 at 7:34 AM, Jerry Feldman <g...@blu.org> wrote:
On 03/03/2010 09:53 PM, Benjamin Scott wrote:
md1 (currently only on sdc2) is bigger than sda2. You will not be able to mirror md1 back on to sda2 without repartitioning sda, which will mean removing sda1 from md0.

That is intentional. /dev/sda is a Seagate and /dev/sdc is a WD.

Interesting. According to the fdisk output, they're identical in size, down to the block. Usually I don't get that lucky, I've found. :)

This way I reduce the chance of a simultaneous failure, like we had at the BLU last year where all of our drives were the same MFR and lot #.

Hmmm, that's a good idea.

Actually, somehow the Fedora partitioner set up /dev/sda1 to end not on a cylinder boundary.

Good job, Fedora. See previous about PC partitioning being a crock. ;-)

If I start from scratch, they will be the same exact size.

One option would be:

A1. Get the unable-to-create-PV problem fixed first, of course
A2. Move the LVM PEs (Physical Extents) currently on sda2 to sdc2 (as described previously)
A3. Remove sda2 from the VG (as described previously)
A4. Remove sda1 from md0 (breaking the mirror), leaving just sdc1 in the mirror
A5. Nuke the partition table on sda
A6. Copy the partition table from sdc to sda, so the two disks have an identical partition layout
A7. Re-mirror md0 and md1 on to the new sda1 and sda2 (respectively)
A8. Reinstall the boot loader on to md0 and/or sda

To nuke a partition table (WARNING: destroys data!):

  dd if=/dev/zero of=/dev/sda bs=512 count=1
  sfdisk -R /dev/sda

(WARNING: The above destroys data!) Then you can copy with:

  sfdisk -d /dev/sdc | sed s/sdc/sda/ | sfdisk /dev/sda

On Thu, Mar 4, 2010 at 11:09 AM, Jerry Feldman <g...@blu.org> wrote:
Since virtually everything on my system is backed up I think that rather than fighting with what I perceive as a bug in LVM2, I will build my system from scratch.

That's a viable option, too. :-)

-- Ben
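To see what the copy pipeline in step A6 actually does, here is the sed stage applied to a made-up sfdisk -d dump (the partition entries below are fabricated for the example; a real dump also has header lines):

```shell
# Illustration of the `sed s/sdc/sda/` stage from the procedure above,
# run against a fabricated sfdisk -d dump rather than a real disk.
dump='/dev/sdc1 : start=       63, size=   401562, Id=fd
/dev/sdc2 : start=   401625, size=1953118440, Id=fd'

# Rewrites every device name from sdc to sda, leaving geometry intact,
# so feeding the result to `sfdisk /dev/sda` clones the layout.
printf '%s\n' "$dump" | sed 's/sdc/sda/'
```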
Re: pvcreate to raid1 fails
On 03/04/2010 11:23 AM, Benjamin Scott wrote:
[... full procedure quoted in Ben's message above ...]
That's a viable option, too. :-)

Getting the unable-to-create-PV problem fixed first is the crux of the situation. I'm wondering if I can boot from a live CD and do that step from something like gparted or Knoppix. In any case, I have a few tasks in front of me in the next few weeks:

1. This (i.e. set up my system as a RAID1).
2. Move from a Blackberry Curve to a Motorola Backflip Android - the main issue is to copy the Blackberry Memo Pad (there are a few ways to do this) over to an Android notepad program, such as Evernote, AKNotes, and many more.
3. Pay taxes - I live in MA.
4. Buy my daughter a Macbook Pro for her birthday - her cat fried her existing one.

--
Jerry Feldman <g...@blu.org>
Re: pvcreate to raid1 fails
On Thu, Mar 4, 2010 at 2:34 PM, Jerry Feldman <g...@blu.org> wrote:
I'm wondering if I can boot from a live CD and do that step from something like gparted or Knoppix.

Oh, that's a good point! pvcreate doesn't do anything in the LVM database or Volume Group; it essentially just does some sanity checks and then sets up metadata structures on the PV. You should be able to do that from any working Linux system, and then reboot back into Fedora and run vgextend to add the PV to the VG.

3. Pay taxes - I live in MA.

It ain't that far off for the rest of the country, either. *rueful grin*

4. Buy my daughter a Macbook Pro for her birthday - her cat fried her existing one.

Wow. That's an expensive cat. :-(

-- Ben
Re: pvcreate to raid1 fails
On 03/04/2010 06:09 PM, Benjamin Scott wrote:
Wow. That's an expensive cat. :-(

True.

--
Jerry Feldman <g...@blu.org>
Re: pvcreate to raid1 fails
Sure, I'll paste a full dump later on - not to the listserv. I am running Fedora 12 with kernel 2.6.31.12-174.2.3.fc12.x86_64. Processor is an AMD Opteron Quad Core, 6GB memory. LVM is lvm2-2.02.53-2.fc12.x86_64. The error message was copied and pasted:

  [...@gaf ~]$ sudo pvcreate /dev/md1
    /dev/md1: pe_align (128 sectors) must not be less than pe_align_offset (36028797018963967 sectors)
    /dev/md1: Format-specific setup of physical volume failed.
    Failed to setup physical volume /dev/md1

I've posted the details at: http://pastebin.com/4AtMzEjr

Note that my version of lvm2 is a bit behind the sources that you reference. Essentially, I back up my entire system using rsnapshot, so I could simply start from scratch, restore my backed-up file systems, and allocate the raid1 volumes from scratch. That is a backup plan, but I would prefer not to do that and instead try to solve the problem.

On 03/03/2010 12:59 AM, Benjamin Scott wrote:

On Tue, Mar 2, 2010 at 7:51 AM, Jerry Feldman <g...@blu.org> wrote:
The difference between the way I did it and your suggestion is that I had "missing" before the /dev/sdc2.

I'm pretty sure that doesn't matter. Going back to the OP:

On Mon, Mar 1, 2010 at 5:48 PM, Jerry Feldman <g...@blu.org> wrote:
[r...@gaf gaf]# pvcreate -v -f /dev/md1
/dev/md1: pe_align (128 sectors) must not be less than pe_align_offset

  $ wget --quiet ftp://sources.redhat.com/pub/lvm2/LVM2.2.02.61.tgz
  $ tar -xzf LVM2.2.02.61.tgz
  $ grep -lr 'must not be less than' LVM2.2.02.61
  $

What OS/distribution, release, and LVM version are you running? Are you sure that you have that error message transcribed correctly? :)

Let's do the infodump drill. Post the output of:

  fdisk -l /dev/sda
  cat /proc/partitions
  mdadm --detail /dev/md0
  mdadm --detail /dev/md1
  mdadm --examine /dev/sda1
  mdadm --examine /dev/sdc1
  mdadm --examine /dev/sdc2
  pvs
  vgs
  lvs

If you prefer, you may want to use http://pastebin.com/ or similar rather than dumping it all into an email. You may also want to explore the following commands:

  lvmdiskscan
  pvscan
  pvck
  vgck

-- Ben

--
Jerry Feldman <g...@blu.org>
Re: pvcreate to raid1 fails
On 03/03/2010 12:59 AM, Benjamin Scott wrote:

On Tue, Mar 2, 2010 at 7:51 AM, Jerry Feldman <g...@blu.org> wrote:
The difference between the way I did it and your suggestion is that I had "missing" before the /dev/sdc2.

I'm pretty sure that doesn't matter. Going back to the OP:

On Mon, Mar 1, 2010 at 5:48 PM, Jerry Feldman <g...@blu.org> wrote:
[r...@gaf gaf]# pvcreate -v -f /dev/md1
/dev/md1: pe_align (128 sectors) must not be less than pe_align_offset

  $ wget --quiet ftp://sources.redhat.com/pub/lvm2/LVM2.2.02.61.tgz
  $ tar -xzf LVM2.2.02.61.tgz
  $ grep -lr 'must not be less than' LVM2.2.02.61
  $

Just a bit more. The message comes from LVM2.2.02.61/lib/format_text/format-text.c. Rather than reiterating the exact code, there are a number of FixMe comments, both at the start of the function _text_pv_setup (line 1794) and elsewhere. Based on the message:

  /dev/md1: pe_align (128 sectors) must not be less than pe_align_offset (36028797018963967 sectors)

I would suspect that pe_align_offset may either not be initialized properly, or is picking up incorrect information since /dev/sdc2 starts at cylinder 26.

--
Jerry Feldman <g...@blu.org>
Re: pvcreate to raid1 fails
On Wed, Mar 3, 2010 at 9:23 AM, Jerry Feldman <g...@blu.org> wrote:
I am running Fedora 12 with kernel 2.6.31.12-174.2.3.fc12.x86_64. LVM is lvm2-2.02.53-2.fc12.x86_64.

Does Fedora have any updates for the kernel or LVM (or device mapper, etc.) to install?

I've posted the details at: http://pastebin.com/4AtMzEjr

Your two disks (eventually to be mirrored) are identical in size, but the partition tables are different. That is okay, but it may confuse people and/or software. For example: md1 (currently only on sdc2) is bigger than sda2. You will not be able to mirror md1 back on to sda2 without repartitioning sda, which will mean removing sda1 from md0.

According to the output of fdisk -l, the end of sda1 and the start of sda2 both occur within cylinder 26. This may or may not be a problem. Can you post fdisk -lu /dev/sda output so we can see the exact sector layout? I want to make sure the partitions do not overlap. According to IBM/Microsoft, partitions start and end on cylinder boundaries. If you ever use any OS or software which assumes IBM/Microsoft semantics, that may cause data loss, since as far as such software sees things, your partitions overlap. And IBM/Microsoft did define the pee sea partition table format... The cylinder boundary issue isn't supposed to matter to Linux (as long as sectors don't overlap), but partitioning in the pea sea is such a crock that it still has me worried.

You may also want to compare the output of fdisk with the output of some other partitioning programs, just to see what other code thinks of your partition table. Sometimes different implementations will disagree. (See above about crock.) Try sfdisk -l. You can modify that with -uS to report sectors, -uB for blocks, or -uC for cylinders. Also try parted: the print command, optionally after various unit commands.

Other than that, things look okay. The kernel sees your partitions as they are defined on disk, RAID is reporting sensible information, and LVM doesn't appear to think sdc2 is already in a VG or anything dumb like that.

On Wed, Mar 3, 2010 at 10:21 AM, Jerry Feldman <g...@blu.org> wrote:
$ grep -lr 'must not be less than' LVM2.2.02.61
The message comes from LVM2.2.02.61/lib/format_text/format-text.c

Neat. Whoever wrote that code split the error message across multiple adjacent C literal strings. Sometimes I wonder if programmers are deliberately making our lives harder. BTW, good catch. How'd you find that? :)

I would suspect that possibly pe_align_offset may either not be initialized properly ...

Is the incredibly-large-number different for different runs of the program? (If so, it's prolly an uninitialized variable; if not, it's prolly broken program logic doing something consistently non-sensible. Not that that helps us much.) I note that you're running x86-64. I wonder if it's programmer brain damage, assuming that all integers are 32 bits wide.

... or is picking up incorrect information since /dev/sdc2 starts at cylinder 26.

That *should* be okay, because sdc1 ends on cylinder 25.

-- Ben
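One arithmetic note supporting the bad-data theory (my own back-of-envelope calculation, not from the LVM source): the bogus number is not random at all — it is exactly what falls out of treating an alignment_offset of -1 as an unsigned 64-bit byte count and converting it to 512-byte sectors:

```shell
# -1 reinterpreted as an unsigned 64-bit byte count is 2^64 - 1;
# integer-dividing by 512 bytes/sector truncates to 2^55 - 1 sectors,
# which is exactly the value pvcreate printed.
sectors=$(( (1 << 55) - 1 ))
echo "$sectors"   # 36028797018963967
```

The fact that the value is stable and exactly 2^55 - 1 fits "broken program logic doing something consistently non-sensible" better than a truly uninitialized variable.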
Re: pvcreate to raid1 fails
On 03/01/2010 07:41 PM, Benjamin Scott wrote:
[... full text quoted in Ben's message below ...]

In this case, I set up /dev/md0 with no problem, and my system is booting with /dev/md0 as the /boot filesystem.

  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc2 missing

The difference between the way I did it and your suggestion is that I had "missing" before the /dev/sdc2:

  mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdc2

That was accomplished, as was the 'dd if=/dev/zero of=/dev/md1 bs=512 count=1'. I tried this with and without a filesystem on /dev/md1.

--
Jerry Feldman <g...@blu.org>
Re: pvcreate to raid1 fails
On Mon, Mar 1, 2010 at 5:48 PM, Jerry Feldman <g...@blu.org> wrote:
pe_align (128 sectors) must not be less than pe_align_offset (36028797018963967 sectors)

By my calculations, the second number of sectors works out to 16 million terabytes, which makes me suspect the diagnostic itself is broken, or being fed broken data.

Note that /dev/md1 is formatted ext4.

Are you trying to import an ext4 filesystem into LVM in place? If so, that won't work. You have to create a new (empty) LV (Logical Volume) and copy the filesystem into the LV. You can do the copy in any number of ways, but mounting both and doing cp -a generally works.

If you just want to use /dev/md1 as a new, empty PV (Physical Volume), then try erasing the filesystem superblock on /dev/md1 so LVM thinks it is blank. WARNING: The following command will destroy all data on the filesystem!

  dd if=/dev/zero of=/dev/md1 bs=512 count=1

WARNING: The preceding command will destroy all data on the filesystem! (I know you know what dd if=/dev/zero does; the warnings are for other readers. (Who also don't care about data remanence.))

You can't do an in-place import of a filesystem to an LV because LVM LVs are structured entities which are not always linear, and LVM PVs contain metadata at the start (and maybe also the end), neither of which is how filesystems see block devices. There are probably cases where it would be doable in theory, but I don't think anyone's bothered with code.

What I am trying to do is to migrate my system (LVM on /dev/sda2) to raid1.

I'm not sure if I have the whole picture. Let me pitch a scenario:

Existing system is a single disk, no RAID:
  sda  = Existing disk
  sda1 = Small boot partition
  sda2 = Large LVM PV
  LVM volume group named VolGroup00
  One or more filesystems and/or swap spaces in LVM LVs

Goal: Migrate system to RAID 1 in-place

  # make a full backup of everything to offline media
  # test the backup
  # add new disk as /dev/sdc
  # set partitions on sdc to be same or larger than sda

  # create degraded mirrors on new disk
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 missing
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc2 missing

  # initialize md1 as an LVM PV
  pvcreate /dev/md1

  # add the new PV to the existing VG (Volume Group)
  vgextend VolGroup00 /dev/md1

  # make sure that backup is still there

  # move data to new PV
  pvmove /dev/sda2 /dev/md1
  # get coffee, make sandwich, read book, etc.

  # remove the now unused old PV from old disk
  vgreduce VolGroup00 /dev/sda2

  # (things get vague now, 'cause I don't want to type everything)
  # (if anyone wants help on something, say so)
  # migrate boot partition to /dev/md0
  # zero first block of /dev/sda1 and /dev/sda2 to avoid confusion

  # add sda partitions to mirror sets
  mdadm --add /dev/md0 /dev/sda1
  mdadm --add /dev/md1 /dev/sda2

  # install boot loader on /dev/md0 and/or on /dev/sdc
  # maybe some other stuff that I forgot

Everybody got that? ;-)

-- Ben
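While the pvmove step runs, the degraded state of the new one-member mirrors is visible in /proc/mdstat; a sketch against sample contents (the mdstat text below is fabricated for illustration, though the [2/1] [U_] notation is what md actually prints for a RAID1 missing one member):

```shell
# A one-member RAID1 shows [2/1] and a "_" in the member map.
# Sample /proc/mdstat contents, invented for this example; on a real
# system you would pipe `cat /proc/mdstat` instead.
mdstat='md1 : active raid1 sdc2[0]
      976559104 blocks [2/1] [U_]'

if printf '%s\n' "$mdstat" | grep -q '\[U_\]'; then
    echo "md1 is degraded (second mirror half still missing)"
fi
```

Once the sda partitions are added back with mdadm --add, the map returns to [2/2] [UU] after the resync completes.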