Re: [linux-lvm] Is MD array (containing a PV) reducible?

2023-10-05 Thread Phillip Susi
"Brian J. Murrell"  writes:

> But I did just think of the other benefit of using MD to sync the new
> device and that's having a recovery path.
>
> That is, if I add luks-backup to /dev/md0 and then wait for it to
> finish syncing, I can then shut down the machine, remove /dev/sdc from
> the machine and start it back up and ensure that all of the needed
> initialization bits are in place to make the luks component come up and
> be readable.  If it is, I just remove it from md0 and use it stand-
> alone (as it is itself a RAID-1 recall).  Reboot again to make sure
> it's all still working and good.
>
> If any of the above goes sideways, I still have the sdc disk as an md0
> member and can put it back in the machine to get back to my starting
> position and try again, trying to figure out where I went wrong.
>
> With pvmove, once the move is complete there is no going back to my
> starting position if something is not right and I cannot access my new
> luks device.

Instead of pvmove, you can use lvconvert to have LVM mirror across the
drives, then break the mirror, like you would with mdadm.
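
Something like this (untested sketch; the LV/PV names are taken from your listing, and it assumes the luks PV is already in the VG):

   # lvconvert --type raid1 -m1 backups/backups /dev/mapper/luks-backup
   # lvs -o name,copy_percent backups      (wait until it reaches 100)
   # lvconvert --splitmirrors 1 --name backups-split backups/backups

Until you split, both legs stay in sync, and you can go back to -m0 on
the original PV if anything goes sideways.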

> So given that, and given that I have started my pvmove with --atomic and
> --interval, do I just SIGINT the pvmove?  Or should I do pvmove --abort
> on another terminal?  How can I know when the abort is complete and
> that I can vgreduce luks-backup out of the backups volume group?


lvs or lvdisplay will show if the move is still in progress.
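
For example (hypothetical output columns):

   # lvs -a -o name,attr,copy_percent backups

While the move is running you will see a hidden [pvmove0] volume and
its Cpy%Sync percentage; once that volume disappears, the move (or
abort) is finished and it is safe to vgreduce.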

> Once I have vgreduced, is there anything I should do to
> /dev/mapper/luks-backup before I mdadm --manage /dev/md0 --add
> /dev/mapper/luks-backup just to wipe any remnants of it being a PV
> previously?

pvremove will wipe the pv label.
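
i.e.:

   # pvremove /dev/mapper/luks-backup

or, more aggressively, wipefs -a will clear any other stale signatures
too; just be certain you have the right device first.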

> I really do appreciate all of the help, advice and patience you have
> given/shown me.

You are welcome.

___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/



Re: [linux-lvm] Is MD array (containing a PV) reducible?

2023-10-04 Thread Phillip Susi
"Brian J. Murrell"  writes:

> So just to confirm, given:
>
> sdc 8:32   0   3.7T  0 disk  
> └─md0   9:00   3.7T  0 raid1 
>   ├─backups-backups   253:13   0   2.5T  0 lvm   
> /.snapshots
>   ├─backups-borg  253:14   0   300G  0 lvm   
>   └─backups-vdo--test 253:15   0   700G  0 lvm   
> └─vdo-backups 253:73   0   1.8T  0 vdo   
> sdd 8:48   0   3.7T  0 disk  
> └─md1   9:10   3.7T  0 raid1 
>   └─luks-backup   253:40   0   3.7T  0 crypt 
>
> I want to do:
>
> # pvcreate /dev/mapper/luks-backup
> # vgextend backups /dev/mapper/luks-backup
> # pvmove /dev/md0 /dev/mapper/luks-backup
> # vgreduce backups /dev/md0
> # mdadm --stop /dev/md0
> # mdadm --zero-superblock /dev/sdc
> # mdadm /dev/md1 --add /dev/sdc
>
> Correct?

Yep.



Re: [linux-lvm] Is MD array (containing a PV) reducible?

2023-10-02 Thread Phillip Susi


"Brian J. Murrell"  writes:

> Just to take advantage of MD's online/background sync to get the data
> from md0 array to the md1 array.

If you weren't using LVM, then that would be a good idea, but LVM can do
that too.




Re: [linux-lvm] Is MD array (containing a PV) reducible?

2023-10-02 Thread Phillip Susi

"Brian J. Murrell"  writes:

> Because I need to add another member to the array that is just slightly
> smaller than the existing array.  It's the same sized disk as what's in
> the array currently but is itself an MD array with luks, which is
> reserving its own metadata region from the total disk, making it
> slightly smaller.

You want to put an MD array inside of another MD array?  Why?

> So yes, ultimately I am going to have both an encrypted and unencrypted
> member of this raid-1 array.  At least until the encrypted member is
> added and then I will remove the unencrypted member and add it to the
> luks encrypted array.
>
> The general idea is to (safely -- as in be able to achieve in specific
> sepearate steps that have a back-out plan) convert a non-encrypted
> raid-1 array into an encrypted raid-1 array.
>
> So what I have right now is:
> sdc 8:32   0   3.7T  0 disk  
> └─md0   9:00   3.7T  0 raid1 
>   ├─backups-backups   253:13   0   2.5T  0 lvm   
> /.snapshots
>   ├─backups-borg  253:14   0   300G  0 lvm   
>   └─backups-vdo--test 253:15   0   700G  0 lvm   
> └─vdo-backups 253:73   0   1.8T  0 vdo   
> sdd 8:48   0   3.7T  0 disk  
> └─md1   9:10   3.7T  0 raid1 
>   └─luks-backup   253:40   0   3.7T  0 crypt 

Rather than doing this:

> I am going to reduce the size of md0 so that it's the same size as md1,
> then add md1 to md0 (so yes, I will have a raid-1 array with a raid-1
> member).  Then I am going to remove md0 from the md1 array and then add
> sdc to md1 so that md0 no longer exists and md1 is the primary raid-1
> array.  Ideally, rename md1 to md0 once done but that's aesthetic.

I would suggest instead that you add the luks volume as a new PV to your
VG, then use pvmove to migrate all of your logical volumes from md0 to
md1, then vgreduce md0 from the VG, stop md0, mdadm --zero-superblock
/dev/sdc, then add sdc to md1.
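
Roughly (untested; adjust the names to your setup):

   # pvcreate /dev/mapper/luks-backup
   # vgextend backups /dev/mapper/luks-backup
   # pvmove /dev/md0 /dev/mapper/luks-backup
   # vgreduce backups /dev/md0
   # mdadm --stop /dev/md0
   # mdadm --zero-superblock /dev/sdc
   # mdadm /dev/md1 --add /dev/sdc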



Re: [linux-lvm] Is MD array (containing a PV) reducible?

2023-10-02 Thread Phillip Susi


"Brian J. Murrell"  writes:

> I can shrink this array down to 3,906,886,464 without impacting the PV
> (and it's contents) correct?

Looks that way.  That would be how much space is left in the array that
is not quite enough for another 4 MiB PE.

> From my calculations there is 4,022,272 bytes of unused space after the
> PV to the end of the array so shrinking it by 1,073,152 shouldn't have
> any impact, correct?

It shouldn't, but why bother?




Re: [linux-lvm] Swapping LLVM drive

2023-09-08 Thread Phillip Susi


Stuart D Gathman  writes:

> 1. Unnecessary copying. 2. You lose your free backup of the system on
> the old drive,
>which should be carefully labeled and kept handy for a year.
>(After that, SSDs start to run the risk of data retention issues.)

You also have unnecessary copying with dd, but yeah, partclone can save
on that.  If you want to keep the free backup with LVM, then instead of
pvmove you can lvconvert the volumes to mirrors, then split the
mirrors.  The nice thing about doing it with LVM is that it can happen
in the background while you still use the computer instead of having to
boot from removable media and wait hours and hours for the copy.
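
A sketch of the mirror-and-split approach (untested; vg0, root, and the
PV names are placeholders, and it assumes both drives' PVs are in the
same VG):

   # lvconvert --type raid1 -m1 vg0/root /dev/new_pv
   ... wait for lvs to show Cpy%Sync at 100 ...
   # lvconvert --splitmirrors 1 --name root_backup vg0/root /dev/old_pv

The split-off copy on the old drive then becomes the labeled shelf
backup, and the whole sync happens while the system stays in use.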

>> Don't forget you'll need to reinstall grub on the new drive for it to
>> boot.
>
> And that is the most important reason.  "Just reinstall grub" is a
> much larger learning curve than "dd" IMO.

Maybe a bit larger, but something that is always good to know and worth
learning.




Re: [linux-lvm] Swapping LLVM drive

2023-08-28 Thread Phillip Susi


Stuart D Gathman  writes:

> Use dd to copy the partition table (this also often contains boot code)
> to the new disk on USB.
> Then use dd to copy the smaller partitions (efi,boot). Now use cfdisk
> to delete the 3rd partition. Expand the boot partition to 1G (you'll
> thank me later).
> Allocate the entire rest of the disk to p3.
> Create a new vg with a different name.  Allocate root and swap on
> new VG the same sizes.
> Take a snapshot of current root (delete swap on old drive since you
> didn't leave yourself any room), and use partclone to efficiently
> copy the filesystem over to new root.

Why would you use dd/partclone instead of just having LVM move
everything to the new drive on the fly?

Partition the new drive, use pvcreate to initialize the partition as a
pv, vgextend to add the pv to the existing vg, pvmove to evacuate the
logical volumes from the old disk, then vgreduce to remove it from the
vg.

Don't forget you'll need to reinstall grub on the new drive for it to
boot.
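
In command form (untested sketch; /dev/sdb is the hypothetical new
drive and vg0/sda2 stand in for your actual VG and old PV):

   # sgdisk -n1:0:0 /dev/sdb     (or partition it however you like)
   # pvcreate /dev/sdb1
   # vgextend vg0 /dev/sdb1
   # pvmove /dev/sda2            (evacuates the old PV in the background)
   # vgreduce vg0 /dev/sda2
   # grub-install /dev/sdb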




Re: [linux-lvm] badblocks

2023-05-22 Thread Phillip Susi

graeme vetterlein  writes:

> I was able to "replace" 1 broken 2TB 1 working 2TB drive with a new
> 4TB drive
> without any filesystem creation, copying etc, just using LVM commands:

>     umount /dev/mapper/SAMSUNG_2TB-data
>     umount /dev/mapper/SAMSUNG_2TB-vimage
>     lvchange -an  SAMSUNG_2TB/data
>     lvchange -an  SAMSUNG_2TB/vimage
>     vgmerge  BARRACUDA_4TB SAMSUNG_2TB     -- I believe this puts
> everything into BARRACUDA_4TB (oddly right to left)
>     pvmove /dev/sdc1         -- Moves everything off sdc1
>     vgreduce BARRACUDA_4TB /dev/sdc1    -- Nothing should be in sdc1,
> so drop it from the group

If they were in the same vg, you wouldn't even have to umount the
filesystem.

> 19,000 hours, I get a popup warnings almost every day now. Smart shows
> it has NO
> reallocated sectors.

What "pop up warning"?  What does smartctl -H say about the drive?  I
don't see anything below that indicates there is anything wrong with
the drive at all.

> SMART Attributes Data Structure revision number: 16
> Vendor Specific SMART Attributes with Thresholds:
> ID# ATTRIBUTE_NAME  FLAG VALUE WORST THRESH TYPE UPDATED 
> WHEN_FAILED RAW_VALUE
>   1 Raw_Read_Error_Rate 0x000b   100   100   016    Pre-fail
> Always   -   0
>   2 Throughput_Performance  0x0005   140   140   054    Pre-fail
> Offline  -   69
>   3 Spin_Up_Time    0x0007   127   127   024    Pre-fail
> Always   -   296 (Average 299)
>   4 Start_Stop_Count    0x0012   100   100   000    Old_age
> Always   -   3554
>   5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail
> Always   -   0
>   7 Seek_Error_Rate 0x000b   100   100   067    Pre-fail
> Always   -   0
>   8 Seek_Time_Performance   0x0005   124   124   020    Pre-fail
> Offline  -   33
>   9 Power_On_Hours  0x0012   098   098   000    Old_age
> Always   -   19080
>  10 Spin_Retry_Count    0x0013   100   100   060    Pre-fail
> Always   -   0
>  12 Power_Cycle_Count   0x0032   100   100   000    Old_age
> Always   -   2959
> 192 Power-Off_Retract_Count 0x0032   097   097   000    Old_age
> Always   -   3616
> 193 Load_Cycle_Count    0x0012   097   097   000    Old_age
> Always   -   3616
> 194 Temperature_Celsius 0x0002   250   250   000    Old_age
> Always   -   24 (Min/Max 13/45)
> 196 Reallocated_Event_Count 0x0032   100   100   000    Old_age
> Always   -   0
> 197 Current_Pending_Sector  0x0022   100   100   000    Old_age
> Always   -   0
> 198 Offline_Uncorrectable   0x0008   100   100   000    Old_age
> Offline  -   0
> 199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age
> Always   -   57
>
>
> Now, I know it's possible that these CRC errors are e.g. 'cable related' but
> I've swapped the cable and moved SATA ports to no effect. In the end I
> decided
> 10 years was enough and bought a new drive.

Yes, those are just errors going over the SATA link.  They would have
been retried and you never noticed.  The question is whether you see any
errors in dmesg or your kernel logs, or reported from badblocks.  Or you
might ask smartctl to have the drive run its own internal test with
smartctl -t long.  The advantage of this over badblocks is that it
doesn't have to waste resources actually sending the data to the CPU
just to test if it can read the disk.  You can then check the drive's
log with smartctl -l selftest to see if it found any errors.
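
e.g.:

   # smartctl -t long /dev/sda
   ... wait the estimated time it prints ...
   # smartctl -l selftest /dev/sda

A healthy result shows "Completed without error"; a read failure will
list the LBA of the first sector it could not read.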

> Any hints? lvm2 commands? I can RTFM but a pointing finger would help.

You can bypass LVM and directly manipulate the device mapper with the
dmsetup command.  Doing this, you can do various other things such as
insert a fake "bad sector" for testing purposes, but you will have to
set up a script in your initramfs to configure the table on every boot.
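
For example, to present a device that throws an I/O error on a single
sector (hypothetical sizes; the error target here stands in for sector
1000 of a 1000001-sector device):

   # dmsetup create testdev <<EOF
   0 1000 linear /dev/sdb1 0
   1000 1 error
   1001 999000 linear /dev/sdb1 1001
   EOF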

> Debian. I'm thinking I'll probably use LVM2 and raid striping (so I
> will have VG
> with many PV in them :-) )

That's one way to go, but if you are currently keeping them as separate
filesystems, you might be interested in looking up snapraid.  It lets
you create parity to be able to recover from file or disk loss like
raid5/6, but not in real time.  It's handy for several disks that
contain files that are rarely written to such as media files.  You can
keep your media disks in standby mode except for the one disk that
contains the file you want to play right now, rather than having to wake
all of the disks up as with raid5/6.  You can also keep your parity
disk(s) offline and drop them in an eSATA dock just to update the parity
when you do modify the files on the data disks.
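
A minimal snapraid.conf sketch (all paths hypothetical):

   parity /mnt/parity/snapraid.parity
   content /var/snapraid/content
   content /mnt/disk1/content
   data d1 /mnt/disk1
   data d2 /mnt/disk2

then run "snapraid sync" whenever the data disks change.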



Re: [linux-lvm] badblocks

2023-05-22 Thread Phillip Susi

graeme vetterlein  writes:

> I have a desktop Linux box (Debian Sid) with a clutch of disks in it
> (4 or 5)  and have mostly defined each disk as a volume group.

Why?  The purpose of a VG is to hold multiple PVs.

> Now a key point is *some of the disks are NAS grade disks.* This means
> they do NOT reallocate bad sectors silently. They report IO errors and 
> leave it to the OS (i.e. the old fashion way of doing this)

Are you sure that you are not confusing the ERC feature here?  That lets
the drive give up in a reasonable amount of time and report the (read)
error rather than keep trying.  Most often there is nothing wrong with
the disk physically and writing to the sector will succeed.  If it
doesn't, then the drive will remap it.

> Then *the penny dropped!* The only component that has a view on the
> real physical disk is lvm2 and in particular the PV ...so if anybody
> can mark(and avoid) badblocks it's the PV...so I should be looking for 
> something akin to the -cc option of fsck , applied to a PV command?

Theoretically yes, you can create a mapping to relocate the block
elsewhere, but there are no tools that I am aware of to help with this.
Again, are you sure there is actually something wrong with the disk?
Try writing to the "bad block" and see what happens.  Every time I have
done this in the last 20 years, the problem just goes away.  Usually
without even triggering a reallocation.  This is because the data just
got a little scrambled even though there is nothing wrong with the
medium.
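
For example, if dmesg reported the failure at LBA 12345678 (a
hypothetical number), you can rewrite just that sector:

   # hdparm --read-sector 12345678 /dev/sdX     (confirm it still fails)
   # hdparm --write-sector 12345678 --yes-i-know-what-i-am-doing /dev/sdX
   # hdparm --read-sector 12345678 /dev/sdX     (should now succeed)

This zeroes the sector, destroying whatever data was there, so work out
which file or LV extent owns it first.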



Re: [linux-lvm] how to convert a disk containing a snapshot to a snapshot lv?

2021-12-21 Thread Phillip Susi

Tomas Dalebjörk  writes:

> hi
>
> I think I didn’t explain this clearly enough.
> All the LVM data is present in the snapshot that I provision from our backup
> system.
> I can guarantee that!
>
> If I just mount that snapshot from our backup system, it works perfectly well
>
> so we don’t need the origin volumes in other way than copying back to it
> we just need to reanimate it as a cow volume
> mentioning that all data has been changed
> the cow is just referencing to the origin location, so no problem there
> All our data is in the cow volume, not just the changes


Ok, so you have thrown out the snapshot relationship when you made your
backup, and backed up both the origin and the snapshot as two separate
backups?  Thus your backup requires much more space than the original
system did since all of the common data has now been duplicated in both
backups.  Now it seems you want to restore both backups, but NOT pay the
storage penalty for duplicating the common parts.  I don't think there
is really a good way of doing that.

I'd say that you can restore the origin volume, then create a snapshot
of that, then run some sort of binary diff between the origin and the
backup of the snapshot and write that diff to the new snapshot.



Re: [linux-lvm] how to convert a disk containing a snapshot to a snapshot lv?

2021-12-20 Thread Phillip Susi

I'm confused.  What exactly are you trying to do here?  I can't think of
any reason why you would want to copy a snapshot lv to a file then try
to loop mount it.

Tomas Dalebjörk  writes:

> Hi,
>
> I am trying to understand how to convert a disk containing snapshot data.
> This is how I tested this:
> 1. locate the snapshot testlv.211218.232255
> root@debian10:/dev/mapper# lvs
>   LV   VG  Attr   LSize   Pool Origin Data%
>  Meta%  Move Log Cpy%Sync Convert
>   home debian10-vg -wi-ao   1.00g
>   root debian10-vg -wi-ao  <3.81g
>   swap_1   debian10-vg -wi-ao 976.00m
>   testlv   debian10-vg owi-aos--- 100.00m
>   testlv.211218.232255 debian10-vg swi-a-s--- 104.00m  testlv 1.44
> root@debian10:/dev/mapper#
>
> 2. copy the lv - cow data to a file
> # dd if=/dev/mapper/debian10--vg-testlv.211218.232255-cow of=/tmp/out
> bs=1024
>
> 3. Setup a loopback disk for the file
> # losetup -fP /tmp/out
>
> 4. Verify that disk exists
> root@debian10:/dev/mapper# losetup -a
> /dev/loop0: [65025]:39274 (/tmp/out)
> root@debian10:/dev/mapper#
>
> 5. Try converting the disk using lvconvert command
> # lvconvert -Zn -s debian10-vg/testlv /tmp/out
>   "/tmp/out": Invalid path for Logical Volume.
>
> 6. Trying creating a softlink in /dev/mapper
> # ln -s /tmp/out debian10--vg-loopback
>
> 7. verify link
> root@debian10:/dev/mapper# ls -la
> total 0
> drwxr-xr-x  2 root root 240 Dec 19 17:40 .
> drwxr-xr-x 18 root root3820 Dec 19 17:22 ..
> crw---  1 root root 10, 236 Dec 10 19:10 control
> lrwxrwxrwx  1 root root   7 Dec 10 19:10 debian10--vg-home -> ../dm-5
> lrwxrwxrwx  1 root root   8 Dec 19 17:40 debian10--vg-loopback ->
> /tmp/out
> lrwxrwxrwx  1 root root   7 Dec 13 22:20 debian10--vg-root -> ../dm-1
> lrwxrwxrwx  1 root root   7 Dec 10 19:10 debian10--vg-swap_1 -> ../dm-4
> lrwxrwxrwx  1 root root   7 Dec 18 23:23 debian10--vg-testlv -> ../dm-0
> lrwxrwxrwx  1 root root   7 Dec 18 23:23
> debian10--vg-testlv.211218.232255 -> ../dm-6
> lrwxrwxrwx  1 root root   7 Dec 18 23:23
> debian10--vg-testlv.211218.232255-cow -> ../dm-3
> lrwxrwxrwx  1 root root   7 Dec 18 23:23 debian10--vg-testlv-real ->
> ../dm-2
> root@debian10:/dev/mapper#
>
> 8. retrying lvconvert command
> root@debian10:/dev/mapper# lvconvert -Zn -s debian10-vg/testlv
> /dev/mapper/debian10--vg-loopback
>   Failed to find logical volume "debian10-vg/loopback"
> root@debian10:/dev/mapper#
>
> Are there more things to be considered, such as recreating pv data on disk?
>
> Regards Tomas



Re: [linux-lvm] Mirror allocation policy

2021-01-06 Thread Phillip Susi


Alasdair G Kergon writes:

> So despite that, it's not letting you have the 'anywhere' for the log
> part of the allocation.

Based on the number of extents it said it was looking for, I didn't
think it was the log that it couldn't place.

> However, for what you are doing, maybe you don't need an on-disk mirror
> log or can temporarily borrow a little space (add small temporary
> loopback PV to the VG?) for it?

I could have sworn that I had tried --corelog yesterday and it still
didn't work, but today it did.  Weird.




Re: [linux-lvm] Mirror allocation policy

2021-01-05 Thread Phillip Susi


Alasdair G Kergon writes:

> On Tue, Jan 05, 2021 at 02:31:00PM -0500, Phillip Susi wrote:
>> How can I force it to make the mirror on a single pv?  
>
> If --alloc anywhere isn't doing the trick, you'll need to dig into
> the - output to try to understand why.  There might be
> a config option setting getting in the way disabling the choice you want
> it to make, or if it's an algorithmic issue you might try to coax it to

I'm not seeing a whole lot here other than what appears to be this flat
out refusal to use extents from the same pv:

#metadata/lv_manip.c:2535 Not using free space on existing
 parallel PV /dev/md1.

Limiting the allocation to two specific extent ranges does not help either.

Full log:

#libdm-config.c:1061   Setting
 allocation/mirror_logs_require_separate_pvs to 0
 #metadata/lv_manip.c:3439 Adjusted allocation request to 7681
 logical extents. Existing size 0. New size 7681.
 #metadata/lv_manip.c:3442 Mirror log of 1 extents of size 8192
 sectors needed for region size 4096.
 #libdm-config.c:1061   Setting allocation/maximise_cling to 1
 #metadata/pv_map.c:54 Allowing allocation on /dev/md1 start PE
 2561 length 23039
 #metadata/pv_map.c:54 Allowing allocation on /dev/md1 start PE
 128000 length 347469
 #metadata/lv_manip.c:2321 Parallel PVs at LE 0 length 7680:
 /dev/md1
 #metadata/lv_manip.c:3186 Trying allocation using contiguous
 policy.
 #metadata/lv_manip.c:2788 Areas to be sorted and filled
 sequentially.
 #metadata/lv_manip.c:2704 Still need 7681 total extents from
 370508 remaining (0 positional slots):
 #metadata/lv_manip.c:2707   1 (1 data/0 parity) parallel areas
 of 7680 extents each
 #metadata/lv_manip.c:2711   1 metadata area of 1 extents each
 #metadata/lv_manip.c:2535 Not using free space on existing
 parallel PV /dev/md1.
 #metadata/lv_manip.c:3186 Trying allocation using cling policy.
 #metadata/lv_manip.c:2783 Cling_to_allocated is set
 #metadata/lv_manip.c:2786 1 preferred area(s) to be filled
 positionally.
 #metadata/lv_manip.c:2704 Still need 7681 total extents from
 370508 remaining (1 positional slots):
 #metadata/lv_manip.c:2707   1 (1 data/0 parity) parallel areas
 of 7680 extents each
 #metadata/lv_manip.c:2711   1 metadata area of 1 extents each
 #metadata/lv_manip.c:2535 Not using free space on existing
 parallel PV /dev/md1.
 #metadata/lv_manip.c:3186 Trying allocation using normal
 policy.
 #metadata/lv_manip.c:2783 Cling_to_allocated is set
 #metadata/lv_manip.c:2788 Areas to be sorted and filled
 sequentially.
 #metadata/lv_manip.c:2704 Still need 7681 total extents from
 370508 remaining (0 positional slots):
 #metadata/lv_manip.c:2707   1 (1 data/0 parity) parallel areas
 of 7680 extents each
 #metadata/lv_manip.c:2711   1 metadata area of 1 extents each
 #metadata/lv_manip.c:2535 Not using free space on existing
 parallel PV /dev/md1.
 #metadata/lv_manip.c:2783 Cling_to_allocated is not set
 #metadata/lv_manip.c:2788 Areas to be sorted and filled
 sequentially.
 #metadata/lv_manip.c:2704 Still need 7681 total extents from
 370508 remaining (0 positional slots):
 #metadata/lv_manip.c:2707   1 (1 data/0 parity) parallel areas
 of 7680 extents each
 #metadata/lv_manip.c:2711   1 metadata area of 1 extents each
 #metadata/lv_manip.c:2535 Not using free space on existing
 parallel PV /dev/md1.
 #metadata/lv_manip.c:3220   Insufficient suitable allocatable extents
 for logical volume : 7681 more required
 




[linux-lvm] Mirror allocation policy

2021-01-05 Thread Phillip Susi
I seem to remember ( and found some references on the web ) that there
used to be a way to change the allocation policy when creating a mirror
from the default of strict ( must use different pvs ), but I can't find
a mention of strict in the man pages these days.  I did see the option
for --alloc anywhere but lvconvert still refuses to change an lv into a
mirror saying that it can't find the extents ( there are plenty of
extents, but only one pv ).

How can I force it to make the mirror on a single pv?  Alternatively,
how can I create a copy of an lv ( I was planning to mirror, then
--splitmirror ).




Re: [linux-lvm] LVM says physical volumes are missing, but they are not

2016-05-30 Thread Phillip Susi

On 05/30/2016 04:15 AM, Zdenek Kabelac wrote:
> Hi
> 
> Please provide full 'vgchange -ay -' trace of such activation
> command.
> 
> Also specify which version of lvm2 is in use by your distro.

2.02.133-1ubuntu10.  I ended up fixing it by doing a vgcfgbackup,
manually editing the text file to remove the MISSING flags, and then
vgcfgrestore.  My guess is that when I initially installed 16.04, I
forgot to install mdadm, and so the init scripts must have forced a
partial activation without the raid pvs present, and that set the
MISSING flag on those pvs in the metadata on the remaining drive.
Once set, there appears to be no way to remedy this other than what I
ended up doing.  It seems to me that having the volume actually
present should override the MISSING flag in the metadata, or at least
you should be able to clear the flag with pvchange.
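
For the record, the sequence was roughly:

   # vgcfgbackup -f /tmp/vg.cfg faldara
   ... edit /tmp/vg.cfg and delete the MISSING flags ...
   # vgcfgrestore -f /tmp/vg.cfg faldara

(faldara being my VG; keep an untouched copy of the file before
editing, since vgcfgrestore rewrites the metadata from it.)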






[linux-lvm] LVM says physical volumes are missing, but they are not

2016-05-29 Thread Phillip Susi

I upgraded last night to Ubuntu 16.04, and LVM has ( even when booting
back into 15.10 ) decided that two of my three physical volumes are
missing, but they are not:

psusi@faldara:~$ sudo pvs
  PV VG  Fmt  Attr PSize   PFree
  /dev/md0   faldara lvm2 a-m  101.91g  58.91g
  /dev/md1   faldara lvm2 a-m1.69t 264.86g
  /dev/sdb1  faldara lvm2 a--1.82t   1.81t
psusi@faldara:~$ sudo vgchange -ay
  Refusing activation of partial LV faldara/Music.  Use
'--activationmode partial' to override.
  Refusing activation of partial LV faldara/Videos.  Use
'--activationmode partial' to override.
  Refusing activation of partial LV faldara/swap.  Use
'--activationmode partial' to override.
  Refusing activation of partial LV faldara/trusty-old.  Use
'--activationmode partial' to override.
  Refusing activation of partial LV faldara/vivid.  Use
'--activationmode partial' to override.
  Refusing activation of partial LV faldara/pool0.  Use
'--activationmode partial' to override.
  Refusing activation of partial LV faldara/xenial.  Use
'--activationmode partial' to override.
  6 logical volume(s) in volume group "faldara" now active

If I use --activationmode partial, then all volumes are correctly
mapped, with no missing extents:

psusi@faldara:~$ sudo dmsetup table faldara-vivid
0 44040192 linear 9:1 2468358144
44040192 24969216 linear 9:1 2996840448
69009408 14876672 linear 8:17 2048
psusi@faldara:~$ sudo dmsetup table faldara-xenial
0 83886080 linear 9:0 98304

How can it claim the pvs are missing, and then go ahead and use them?
 Is there a way to manually clear the missing flag from the pv?

