Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-09 Thread Stuart D Gathman

I use a utility that maps bad sectors to files, then move/rename the
files into a bad blocks folder.  (Yes, this doesn't work when critical
areas are affected.)  If you simply remove the files, then
modern disks will internally remap the sectors when they are written
again  - but the quality of remapping implementations varies.
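
On ext4, for example, the hunt can be done with badblocks plus debugfs -
a rough sketch only (untested here; the device, block, and inode numbers
are purely illustrative, and badblocks must be run with the fs block size
so its numbers line up with debugfs):

badblocks -b 4096 -sv /dev/sdb1 > bad-fs-blocks.txt   # scan in fs-block units
debugfs -R "icheck 123456" /dev/sdb1   # which inode owns bad fs block 123456?
debugfs -R "ncheck 7890" /dev/sdb1     # which path belongs to inode 7890?
mkdir -p /mnt/data/badblocks
mv /mnt/data/some/victim /mnt/data/badblocks/   # quarantine, never rewrite
smartctl -A /dev/sdb | grep -i -e reallocated -e pending  # watch remap counters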

It is more time efficient to just buy a new disk, but with wars and
rumors of wars threatening to disrupt supply chains, including tech,
it's nice to have the skills to get more use from failing hardware.

Plus, it is a challenging problem, which can be fun to work on at leisure.

On Sun, 9 Apr 2023, Roland wrote:


 What is your use case that you believe removing a block in the middle
 of an LV needs to work?


my use case is creating a badblocks script with lvm which intelligently
handles and skips broken sectors on disks which can't be used otherwise...





Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-09 Thread Roland

thank you, very valuable!

On 09.04.23 at 20:53, Roger Heflin wrote:

On Sun, Apr 9, 2023 at 1:21 PM Roland  wrote:

Well, if the LV is being used for anything real, then I don't know of
anything where you could remove a block in the middle and still have a
working fs.  You can only reduce fs'es (the ones that can be reduced at
all) by shrinking them from the end.

yes, that's clear to me.


It makes zero sense to be able to remove a block in the middle of an LV
for just about everything that uses LVs, as nothing supports having a
block removed from the middle.

yes, that criticism is totally valid. from a fs point of view you completely
corrupt the volume, that's clear to me.


What is your use case that you believe removing a block in the middle
of an LV needs to work?

my use case is creating a badblocks script with lvm which intelligently
handles and skips broken sectors on disks which can't be used otherwise...

my plan is to scan a disk for usable sectors and map the logical volume
around the broken sectors.

whenever more sectors get broken, i'd like to remove the broken ones to have
a usable lv without broken sectors.

since you need to rebuild your data anyway for that disk, you can also
recreate the whole logical volume.

my question and my project are a little bit academic. i simply want to try
out how much use you can get from some dead disks which are trash otherwise...
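
for illustration, i imagine stitching the good ranges together with explicit
PE ranges, roughly like this (a sketch only - the VG name is from my test
below, and the bad PE numbers 5 and 10 are made up):

# suppose PEs 5 and 10 on /dev/sdb are bad; allocate around them
lvcreate -n rescued -l 5 mytestVG /dev/sdb:0-4     # PEs 0-4
lvextend -l +4 mytestVG/rescued /dev/sdb:6-9       # skip bad PE 5
lvextend -l +10 mytestVG/rescued /dev/sdb:11-20    # skip bad PE 10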


the manpage says this:


 Resize an LV by specified PV extents.

 lvresize LV PV ...
 [ -r|--resizefs ]
 [ COMMON_OPTIONS ]



so it sounds like i can resize in either direction by specifying extents.



Now if you really need to remove a specific block in the middle of the
LV then you are likely going to need to use pvmove with specific
blocks to replace those blocks with something else.

yes, pvmove is the other approach for that.

but will pvmove always continue/finish when moving extents located on a
bad sector?

the data may be corrupted anyway, so i thought it's better to skip it.

what i'm really after is some "remap a physical extent to a healthy/reserved
section and let zfs selfheal do the rest".  just like "dismiss the problematic
extents and replace with healthy extents".

i'd prefer remapping over removing a PE, as removing will invalidate
the whole LV
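
i.e. something like this (just a sketch - the PE numbers are illustrative,
and pvmove may simply error out if the source extent is unreadable):

# move whatever LV data sits on bad PE 10 to a healthy spot on the same disk
pvmove --alloc anywhere /dev/sdb:10 /dev/sdb:5000
# or evacuate that single extent to another disk in the VG
pvmove /dev/sdb:10 /dev/sdc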

roland



Create an LV per device, and when the device is replaced then lvremove
that device's LV.  Once a sector/area is bad I would not trust the
sectors until you replace the device.  You may be able to retry the
pvmove multiple times and the disk may eventually be able to rebuild
the data.
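
For instance (a sketch; the VG and LV names are made up):

# one LV pinned to each PV, so a dying disk maps to exactly one LV
lvcreate -n data_sdb -l 100%PVS vg0 /dev/sdb
lvcreate -n data_sdc -l 100%PVS vg0 /dev/sdc
# after /dev/sdb is replaced:
lvremove vg0/data_sdb
vgreduce vg0 /dev/sdb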

My experience with bad sectors is that once a sector reports bad, the disk
will often rewrite it at the same location and call it "good" when it is
going to report bad again almost immediately, or turn into a uselessly slow
sector.  Sometimes the disk will relocate the sector on a
re-write/successful read, but that seems unreliable.

On non-zfs fs'es I have found the "bad" file, renamed it, and put it in
a dir called badblocks.  So long as the bad block is in the file's data,
you can contain the bad block by containing the bad file.  And since most
of the disk will be file data, that is also a management scheme that does
not require a fs rebuild.

The re-written sector may also be "slow", and it might be wise to treat
those sectors as bad; in the "slow" sector case pvmove should actually
work.  For that you would need a badblocks that "timed" the reads from
disk and treated any sector taking longer than, say, 0.25 seconds as
slow/bad.  At 5400 rpm a revolution takes about 11 ms, so 0.25 s (250 ms)
translates to around 22 failed re-read tries.  If you time whole reads
you may have to re-test the group in smaller aligned reads to figure out
which sector in the main read was bad.  If you scanned often enough for
slow sectors you might catch them before they are completely bad.
Technically the disk is supposed to do that on its own scans, but even
when I have turned the scans up to daily it does not seem to act right.
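
A minimal timed-scan sketch (untested; assumes bash, GNU coreutils
dd/date/blockdev, and a 512-byte-sector disk):

DEV=/dev/sdb                           # device under test (illustrative)
TOTAL=$(blockdev --getsz "$DEV")       # size in 512-byte sectors
for ((s = 0; s < TOTAL; s += 64)); do  # 64 sectors = 32 KiB per read
    t0=$(date +%s%N)
    dd if="$DEV" of=/dev/null bs=512 skip="$s" count=64 iflag=direct 2>/dev/null
    t1=$(date +%s%N)
    ms=$(( (t1 - t0) / 1000000 ))
    (( ms > 250 )) && echo "slow/bad: sectors $s-$((s + 63)) (${ms} ms)"
done

(One dd per 32 KiB is far too slow for a whole-disk scan; a real script
would read large chunks and only re-read the slow ones in 32 KiB steps.)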

And I have usually found that the bad "units" are 8 units of 8
512-byte sectors for a total of around 32k (aligned on the disk).



Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-09 Thread Roger Heflin
On Sun, Apr 9, 2023 at 1:21 PM Roland  wrote:
>
> > Well, if the LV is being used for anything real, then I don't know of
> > anything where you could remove a block in the middle and still have a
> > working fs.  You can only reduce fs'es (the ones that can be reduced at
> > all) by shrinking them from the end.
>
> yes, that's clear to me.
>
> > It makes zero sense to be able to remove a block in the middle of an LV
> > for just about everything that uses LVs, as nothing supports having a
> > block removed from the middle.
>
> yes, that criticism is totally valid. from a fs point of view you completely
> corrupt the volume, that's clear to me.
>
> > What is your use case that you believe removing a block in the middle
> > of an LV needs to work?
>
> my use case is creating a badblocks script with lvm which intelligently
> handles and skips broken sectors on disks which can't be used otherwise...
>
> my plan is to scan a disk for usable sectors and map the logical volume
> around the broken sectors.
>
> whenever more sectors get broken, i'd like to remove the broken ones to have
> a usable lv without broken sectors.
>
> since you need to rebuild your data anyway for that disk, you can also
> recreate the whole logical volume.
>
> my question and my project are a little bit academic. i simply want to try
> out how much use you can get from some dead disks which are trash
> otherwise...
>
>
> the manpage says this:
>
>
> Resize an LV by specified PV extents.
>
> lvresize LV PV ...
> [ -r|--resizefs ]
> [ COMMON_OPTIONS ]
>
>
>
> so it sounds like i can resize in either direction by specifying extents.
>
>
> > Now if you really need to remove a specific block in the middle of the
> > LV then you are likely going to need to use pvmove with specific
> > blocks to replace those blocks with something else.
>
> yes, pvmove is the other approach for that.
>
> but will pvmove always continue/finish when moving extents located on a
> bad sector?
>
> the data may be corrupted anyway, so i thought it's better to skip it.
>
> what i'm really after is some "remap a physical extent to a healthy/reserved
> section and let zfs selfheal do the rest".  just like "dismiss the problematic
> extents and replace with healthy extents".
>
> i'd prefer remapping over removing a PE, as removing will
> invalidate
> the whole LV
>
> roland
>


Create an LV per device, and when the device is replaced then lvremove
that device's LV.  Once a sector/area is bad I would not trust the
sectors until you replace the device.  You may be able to retry the
pvmove multiple times and the disk may eventually be able to rebuild
the data.

My experience with bad sectors is that once a sector reports bad, the disk
will often rewrite it at the same location and call it "good" when it is
going to report bad again almost immediately, or turn into a uselessly slow
sector.  Sometimes the disk will relocate the sector on a
re-write/successful read, but that seems unreliable.

On non-zfs fs'es I have found the "bad" file, renamed it, and put it in
a dir called badblocks.  So long as the bad block is in the file's data,
you can contain the bad block by containing the bad file.  And since most
of the disk will be file data, that is also a management scheme that does
not require a fs rebuild.

The re-written sector may also be "slow", and it might be wise to treat
those sectors as bad; in the "slow" sector case pvmove should actually
work.  For that you would need a badblocks that "timed" the reads from
disk and treated any sector taking longer than, say, 0.25 seconds as
slow/bad.  At 5400 rpm a revolution takes about 11 ms, so 0.25 s (250 ms)
translates to around 22 failed re-read tries.  If you time whole reads
you may have to re-test the group in smaller aligned reads to figure out
which sector in the main read was bad.  If you scanned often enough for
slow sectors you might catch them before they are completely bad.
Technically the disk is supposed to do that on its own scans, but even
when I have turned the scans up to daily it does not seem to act right.

And I have usually found that the bad "units" are 8 units of 8
512-byte sectors for a total of around 32k (aligned on the disk).



Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-09 Thread Roland

Well, if the LV is being used for anything real, then I don't know of
anything where you could remove a block in the middle and still have a
working fs.  You can only reduce fs'es (the ones that can be reduced at
all) by shrinking them from the end.


yes, that's clear to me.


It makes zero sense to be able to remove a block in the middle of an LV
for just about everything that uses LVs, as nothing supports having a
block removed from the middle.


yes, that criticism is totally valid. from a fs point of view you completely
corrupt the volume, that's clear to me.


What is your use case that you believe removing a block in the middle
of an LV needs to work?


my use case is creating a badblocks script with lvm which intelligently
handles and skips broken sectors on disks which can't be used otherwise...

my plan is to scan a disk for usable sectors and map the logical volume
around the broken sectors.

whenever more sectors get broken, i'd like to remove the broken ones to have
a usable lv without broken sectors.

since you need to rebuild your data anyway for that disk, you can also
recreate the whole logical volume.

my question and my project are a little bit academic. i simply want to try
out how much use you can get from some dead disks which are trash otherwise...


the manpage says this:


   Resize an LV by specified PV extents.

   lvresize LV PV ...
   [ -r|--resizefs ]
   [ COMMON_OPTIONS ]



so it sounds like i can resize in either direction by specifying extents.



Now if you really need to remove a specific block in the middle of the
LV then you are likely going to need to use pvmove with specific
blocks to replace those blocks with something else.


yes, pvmove is the other approach for that.

but will pvmove always continue/finish when moving extents located on a
bad sector?

the data may be corrupted anyway, so i thought it's better to skip it.

what i'm really after is some "remap a physical extent to a healthy/reserved
section and let zfs selfheal do the rest".  just like "dismiss the problematic
extents and replace with healthy extents".

i'd prefer remapping over removing a PE, as removing will invalidate
the whole LV

roland






On 09.04.23 at 19:32, Roger Heflin wrote:

On Sun, Apr 9, 2023 at 10:18 AM Roland  wrote:

hi,

we can extend a logical volume by arbitrary pv extents like this:


root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:5
Size of logical volume mytestVG/blocks_allocated changed from 1.00
MiB (1 extents) to 2.00 MiB (2 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:10
Size of logical volume mytestVG/blocks_allocated changed from 2.00
MiB (2 extents) to 3.00 MiB (3 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:15
Size of logical volume mytestVG/blocks_allocated changed from 3.00
MiB (3 extents) to 4.00 MiB (4 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:20
Size of logical volume mytestVG/blocks_allocated changed from 4.00
MiB (4 extents) to 5.00 MiB (5 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
LV               Start  SSize  Start
blocks_allocated     0      1      0
                     0      4      1
blocks_allocated     1      1      5
                     0      4      6
blocks_allocated     2      1     10
                     0      4     11
blocks_allocated     3      1     15
                     0      4     16
blocks_allocated     4      1     20
                     0 476917     21


how can i do this in reverse?

when i specify the physical extent to be added, it works - but when i
specify the physical extent to be removed,
the last one is removed, not the specified one.

see here for example - i wanted to remove extent number 10, just as i
added it, but instead extent number 20
is removed

root@s740:~# lvresize mytestVG/blocks_allocated -l -1 /dev/sdb:10
Ignoring PVs on command line when reducing.
WARNING: Reducing active logical volume to 4.00 MiB.
THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mytestVG/blocks_allocated? [y/n]: y
Size of logical volume mytestVG/blocks_allocated changed from 5.00
MiB (5 extents) to 4.00 MiB (4 extents).
Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
LV               Start  SSize  Start
blocks_allocated     0      1      0
                     0      4      1
blocks_allocated     1      1

Re: [linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-09 Thread Roger Heflin
On Sun, Apr 9, 2023 at 10:18 AM Roland  wrote:
>
> hi,
>
> we can extend a logical volume by arbitrary pv extents like this:
>
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:5
>Size of logical volume mytestVG/blocks_allocated changed from 1.00
> MiB (1 extents) to 2.00 MiB (2 extents).
>Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:10
>Size of logical volume mytestVG/blocks_allocated changed from 2.00
> MiB (2 extents) to 3.00 MiB (3 extents).
>Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:15
>Size of logical volume mytestVG/blocks_allocated changed from 3.00
> MiB (3 extents) to 4.00 MiB (4 extents).
>Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:20
>Size of logical volume mytestVG/blocks_allocated changed from 4.00
> MiB (4 extents) to 5.00 MiB (5 extents).
>Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
>    LV               Start  SSize  Start
>    blocks_allocated     0      1      0
>                         0      4      1
>    blocks_allocated     1      1      5
>                         0      4      6
>    blocks_allocated     2      1     10
>                         0      4     11
>    blocks_allocated     3      1     15
>                         0      4     16
>    blocks_allocated     4      1     20
>                         0 476917     21
>
>
> how can i do this in reverse?
>
> when i specify the physical extent to be added, it works - but when i
> specify the physical extent to be removed,
> the last one is removed, not the specified one.
>
> see here for example - i wanted to remove extent number 10, just as i
> added it, but instead extent number 20
> is removed
>
> root@s740:~# lvresize mytestVG/blocks_allocated -l -1 /dev/sdb:10
>Ignoring PVs on command line when reducing.
>WARNING: Reducing active logical volume to 4.00 MiB.
>THIS MAY DESTROY YOUR DATA (filesystem etc.)
> Do you really want to reduce mytestVG/blocks_allocated? [y/n]: y
>Size of logical volume mytestVG/blocks_allocated changed from 5.00
> MiB (5 extents) to 4.00 MiB (4 extents).
>Logical volume mytestVG/blocks_allocated successfully resized.
>
> root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
>    LV               Start  SSize  Start
>    blocks_allocated     0      1      0
>                         0      4      1
>    blocks_allocated     1      1      5
>                         0      4      6
>    blocks_allocated     2      1     10
>                         0      4     11
>    blocks_allocated     3      1     15
>                         0 476922     16
>
>
> how can i remove extent number 10?
>
> is this a bug?
>

Well, if the LV is being used for anything real, then I don't know of
anything where you could remove a block in the middle and still have a
working fs.  You can only reduce fs'es (the ones that can be reduced at
all) by shrinking them from the end.

It makes zero sense to be able to remove a block in the middle of an LV
for just about everything that uses LVs, as nothing supports having a
block removed from the middle.

What is your use case that you believe removing a block in the middle
of an LV needs to work?

Now if you really need to remove a specific block in the middle of the
LV then you are likely going to need to use pvmove with specific
blocks to replace those blocks with something else.



[linux-lvm] bug? shrink lv by specifying pv extent to be removed does not behave as expected

2023-04-09 Thread Roland

hi,

we can extend a logical volume by arbitrary pv extents like this:


root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:5
  Size of logical volume mytestVG/blocks_allocated changed from 1.00 MiB (1 extents) to 2.00 MiB (2 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:10
  Size of logical volume mytestVG/blocks_allocated changed from 2.00 MiB (2 extents) to 3.00 MiB (3 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:15
  Size of logical volume mytestVG/blocks_allocated changed from 3.00 MiB (3 extents) to 4.00 MiB (4 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# lvresize mytestVG/blocks_allocated -l +1 /dev/sdb:20
  Size of logical volume mytestVG/blocks_allocated changed from 4.00 MiB (4 extents) to 5.00 MiB (5 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
  LV               Start  SSize  Start
  blocks_allocated     0      1      0
                       0      4      1
  blocks_allocated     1      1      5
                       0      4      6
  blocks_allocated     2      1     10
                       0      4     11
  blocks_allocated     3      1     15
                       0      4     16
  blocks_allocated     4      1     20
                       0 476917     21


how can i do this in reverse?

when i specify the physical extent to be added, it works - but when i
specify the physical extent to be removed, the last one is removed, not
the specified one.

see here for example - i wanted to remove extent number 10, just as i
added it, but instead extent number 20 is removed

root@s740:~# lvresize mytestVG/blocks_allocated -l -1 /dev/sdb:10
  Ignoring PVs on command line when reducing.
  WARNING: Reducing active logical volume to 4.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce mytestVG/blocks_allocated? [y/n]: y
  Size of logical volume mytestVG/blocks_allocated changed from 5.00 MiB (5 extents) to 4.00 MiB (4 extents).
  Logical volume mytestVG/blocks_allocated successfully resized.

root@s740:~# pvs --segments -olv_name,seg_start_pe,seg_size_pe,pvseg_start -O pvseg_start
  LV               Start  SSize  Start
  blocks_allocated     0      1      0
                       0      4      1
  blocks_allocated     1      1      5
                       0      4      6
  blocks_allocated     2      1     10
                       0      4     11
  blocks_allocated     3      1     15
                       0 476922     16


how can i remove extent number 10?

is this a bug?

regards
roland

___
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/