On Tue, Mar 5, 2019 at 12:45 AM Cesare Leonardi wrote:
> On 02/03/19 21:25, Nir Soffer wrote:
> > # mkfs.xfs /dev/test/lv1
> > meta-data=/dev/test/lv1  isize=512   agcount=4, agsize=25600 blks
> >          =               sectsz=512  attr=2, projid32bit=1
> >          =
On Tue, Mar 5, 2019 at 11:30 AM Ilia Zykov wrote:
> Hello.
>
> >> THAT is a crucial observation. It's not an LVM bug, but the filesystem
> >> trying to read 1024 bytes on a 4096 device.
> > Yes that's probably the reason. Nevertheless, it's not really the FS's
> > fault, since it was moved by LVM t
On Tue, 5 Mar 2019, David Teigland wrote:
On Tue, Mar 05, 2019 at 06:29:31PM +0200, Nir Soffer wrote:
Maybe LVM should let you mix PVs with different logical block size, but it
should require --force.
LVM needs to fix this; your solution sounds like the right one.
Also, since nearly every m
On Tue, Mar 05, 2019 at 06:29:31PM +0200, Nir Soffer wrote:
> I don't think this way of thinking is useful. If we go in this way, then
> write() should not let you write data, and later maybe the disk controller
> should avoid this?
>
> LVM is not a low level tool like dd. It is high level tool for mana
On 05.03.2019 10:29, Ilia Zykov wrote:
> Hello.
>
>>> THAT is a crucial observation. It's not an LVM bug, but the filesystem
>>> trying to read 1024 bytes on a 4096 device.
>> Yes that's probably the reason. Nevertheless, it's not really the FS's fault,
>> since it was moved by LVM to a 4096 de
Hello.
>> THAT is a crucial observation. It's not an LVM bug, but the filesystem
>> trying to read 1024 bytes on a 4096 device.
> Yes that's probably the reason. Nevertheless, it's not really the FS's fault,
> since it was moved by LVM to a 4096 device.
> The FS does not know anything about the
On 05.03.2019 00:22, Nir Soffer wrote:
> On Tue, Mar 5, 2019 at 12:45 AM Cesare Leonardi wrote:
>
>> On 02/03/19 21:25, Nir Soffer wrote:
>>> # mkfs.xfs /dev/test/lv1
>>> meta-data=/dev/test/lv1  isize=512   agcount=4, agsize=25600 blks
>>>          =               sectsz=512
On 05.03.2019 01:12, Stuart D. Gathman wrote:
> On Mon, 4 Mar 2019, Cesare Leonardi wrote:
>
>> Today I repeated all the tests and indeed in one case the mount failed:
>> after pvmoving from the 512/4096 disk to the 4096/4096 disk, with the
>> ext4 LV using a 1024-byte block size.
> ...
>> The error h
On Mon, 4 Mar 2019, Cesare Leonardi wrote:
Today I repeated all the tests and indeed in one case the mount failed: after
pvmoving from the 512/4096 disk to the 4096/4096 disk, with the ext4 LV using
a 1024-byte block size.
...
The error happened where you guys expected. And also for me fsck showed n
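The failing combination above reduces to simple arithmetic: filesystem block
reads are issued in units of the device's logical sector, so a filesystem block
size that is not a multiple of the logical sector size cannot be read at all.
A minimal sketch of that check (variable names are illustrative, not from the
thread):

```shell
# Why ext4 with 1k blocks fails on a 4096-byte logical-sector disk:
# a single-block read would be shorter than one logical sector.
fs_bs=1024      # ext4 created with -b 1024, as in the failing test
dev_lbs=4096    # logical sector size of the native 4K destination disk
if [ $(( fs_bs % dev_lbs )) -eq 0 ]; then
    echo "compatible"
else
    echo "incompatible: ${fs_bs}-byte blocks on a ${dev_lbs}-byte-sector device"
fi
```

The same arithmetic explains why the 4096-block-size filesystems in the tests
survived the move: 4096 % 4096 == 0.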
On 02/03/19 21:25, Nir Soffer wrote:
# mkfs.xfs /dev/test/lv1
meta-data=/dev/test/lv1  isize=512   agcount=4, agsize=25600 blks
         =               sectsz=512  attr=2, projid32bit=1
         =               crc=1       finobt=1, sparse=0, rmapbt=0, reflink=0
da
On 04/03/19 10:12, Ingo Franzki wrote:
# blockdev -v --getss --getpbsz --getbsz /dev/sdb
get logical block (sector) size: 512
get physical block (sector) size: 512
get blocksize: 4096
You display the physical block size of /dev/sdb here, but you use /dev/sdb5
later on.
Not sure if this makes a
On 02.03.2019 02:36, Cesare Leonardi wrote:
> Hello Ingo, I've made several tests but I was unable to trigger any
> filesystem corruption. Maybe the trouble you encountered is specific to
> encrypted devices?
>
> Yesterday and today I've used:
> Debian unstable
> kernel 4.19.20
> lvm2 2.03.02
>
On Sat, Mar 2, 2019 at 3:38 AM Cesare Leonardi wrote:
> Hello Ingo, I've made several tests but I was unable to trigger any
> filesystem corruption. Maybe the trouble you encountered is specific to
> encrypted devices?
>
> Yesterday and today I've used:
> Debian unstable
> kernel 4.19.20
> lvm2 2
>>
>> smartctl -i /dev/sdb; blockdev --getbsz --getpbsz /dev/sdb
>> Device Model:  HGST HUS722T2TALA604
>> User Capacity: 2,000,398,934,016 bytes [2.00 TB]
>> Sector Size:   512 bytes logical/physical
>> Rotation Rate: 7200 rpm
>> Form Factor:   3.5 inches
>> 4096
>> 512
>>
>> As yo
On 2/27/2019 9:05 AM, Ingo Franzki wrote:
> Yes that should work:
> # losetup -fP loopbackfile.img --sector-size 4096
> # blockdev --getpbsz /dev/loop0
> 4096
>
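Ingo's loopback recipe can be written out as a short script. This is a sketch,
assuming root privileges, a util-linux recent enough for losetup's
--sector-size option (>= 2.30), and an illustrative backing-file name:

```shell
#!/bin/sh
# Create a loopback device that reports 4096-byte logical sectors, so the
# pvmove problem can be reproduced without real 4K-native hardware.
[ "$(id -u)" -eq 0 ] || { echo "needs root" >&2; exit 1; }
truncate -s 1G loopbackfile.img                     # sparse backing file
dev=$(losetup -f --show --sector-size 4096 loopbackfile.img)
blockdev --getss --getpbsz "$dev"                   # both should print 4096
losetup -d "$dev"                                   # clean up
rm -f loopbackfile.img
```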
-
Something I noticed that is troublesome. When I first got my 4K sector
size disks, one of the numbers in the kernel listed it a
Hello Ingo, I've made several tests but I was unable to trigger any
filesystem corruption. Maybe the trouble you encountered is specific to
encrypted devices?
Yesterday and today I've used:
Debian unstable
kernel 4.19.20
lvm2 2.03.02
e2fsprogs 1.44.5
On 01/03/19 09:05, Ingo Franzki wrote:
Hmm
On 01.03.2019 02:24, Cesare Leonardi wrote:
> On 28/02/19 09:41, Ingo Franzki wrote:
>> Well, there are the following 2 commands:
>>
>> Get physical block size:
>> blockdev --getpbsz
>> Get logical block size:
>> blockdev --getbsz
>
> I didn't know the blockdev command and, to recap, we have
On 01.03.2019 04:41, Stuart D. Gathman wrote:
> On Fri, 1 Mar 2019, Cesare Leonardi wrote:
>
>> I've done the test suggested by Stuart and it seems to contradict this.
>> I have pvmoved data from a 512/512 (logical/physical) disk to a newly added
>> 512/4096 disk but I had no data corruption. Unf
On Fri, 1 Mar 2019, Cesare Leonardi wrote:
I've done the test suggested by Stuart and it seems to contradict this.
I have pvmoved data from a 512/512 (logical/physical) disk to a newly added
512/4096 disk but I had no data corruption. Unfortunately I haven't any
native 4k disk to repeat the sa
On 28/02/19 09:41, Ingo Franzki wrote:
Well, there are the following 2 commands:
Get physical block size:
blockdev --getpbsz
Get logical block size:
blockdev --getbsz
I didn't know the blockdev command and, to recap, we have:
--getpbsz: physical sector size
--getss: logical sector size
-
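The two device properties in the recap can also be read without root from
sysfs, which is handy for checking every disk before a pvmove. A sketch
assuming Linux (--getbsz has no sysfs counterpart here because it reports the
kernel's soft block size, not a device property):

```shell
# Print logical and physical sector size for each block device:
#   queue/logical_block_size  <->  blockdev --getss
#   queue/physical_block_size <->  blockdev --getpbsz
for q in /sys/block/*/queue; do
    [ -r "$q/logical_block_size" ] || continue
    dev=$(basename "$(dirname "$q")")
    printf '%s: logical=%s physical=%s\n' "$dev" \
        "$(cat "$q/logical_block_size")" "$(cat "$q/physical_block_size")"
done
```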
> At the time the file system was created (possibly many years ago), I did not
> know that I would ever move it to a device with a larger block size.
>
For this purpose all 4k disks have logical sector size 512.
Don't look at "blockdev --getbsz"; it's not a property of the physical (real)
device.
smi
On 28.02.2019 15:36, Ilia Zykov wrote:
>> Discarding device blocks: done
>> Creating filesystem with 307200 1k blocks and 76912 inodes
>> ..
>> # pvs
>> /dev/LOOP_VG/LV: read failed after 0 of 1024 at 0: Invalid argument
>> /dev/LOOP_VG/LV: read failed after 0 of 1024 at 314507264: Invalid
> Discarding device blocks: done
> Creating filesystem with 307200 1k blocks and 76912 inodes
> ..
> # pvs
> /dev/LOOP_VG/LV: read failed after 0 of 1024 at 0: Invalid argument
> /dev/LOOP_VG/LV: read failed after 0 of 1024 at 314507264: Invalid argument
> /dev/LOOP_VG/LV: read failed aft
>
>>
>> For the problem mentioned in this thread, the physical block size is what
>> you are looking for.
>>>
>
> I think it is a BUG in "blockdev" (util-linux).
It's not a bug, it's a feature :O
https://bugzilla.redhat.com/show_bug.cgi?id=1684078
> My question was:
>
> Can this error(or simi
>>
>> smartctl -i /dev/sdb; blockdev --getbsz --getpbsz /dev/sdb
>> Device Model:  HGST HUS722T2TALA604
>> User Capacity: 2,000,398,934,016 bytes [2.00 TB]
>> Sector Size:   512 bytes logical/physical
>> Rotation Rate: 7200 rpm
>> Form Factor:   3.5 inches
>> 4096
>> 512
>>
>> As yo
On 28.02.2019 10:48, Ilia Zykov wrote:
>>
>> Well, there are the following 2 commands:
>>
>> Get physical block size:
>> blockdev --getpbsz
>> Get logical block size:
>> blockdev --getbsz
>>
>> Filesystems seem to care about the physical block size only, not the logical
>> block size.
>>
>> S
>
> Well, there are the following 2 commands:
>
> Get physical block size:
> blockdev --getpbsz
> Get logical block size:
> blockdev --getbsz
>
> Filesystems seem to care about the physical block size only, not the logical
> block size.
>
> So as soon as you have PVs with different physic
On 28.02.2019 02:31, Cesare Leonardi wrote:
> On 27/02/19 09:49, Ingo Franzki wrote:
>> As far as I can tell: Yes if you pvmove data around or lvextend an LV onto
>> another PV with a larger physical block size that is dangerous.
>> Creating new LVs and thus new file systems on mixed configuration
On Thu, 28 Feb 2019, Cesare Leonardi wrote:
Not to be pedantic, but what do you mean by "physical block"? Because with
modern disks the term is not always clear. Let's take a mechanical disk with
512e sectors, that is, with 4k sectors but exposed as 512-byte sectors. Fdisk
will refer to it with
On 27/02/19 09:49, Ingo Franzki wrote:
As far as I can tell: Yes if you pvmove data around or lvextend an LV onto
another PV with a larger physical block size that is dangerous.
Creating new LVs and thus new file systems on mixed configurations seem to be
OK.
[...]
And yes, it's unrelated to
On 27.02.2019 15:59, Stuart D. Gathman wrote:
> On Wed, 27 Feb 2019, Ingo Franzki wrote:
>
>> The good thing about the example with encrypted volumes on loopback
>> devices is that you can reproduce the problem on any platform, without
>> having certain hardware requirements.
>
> The losetup comm
On Wed, 27 Feb 2019, Ingo Franzki wrote:
The good thing about the example with encrypted volumes on loopback
devices is that you can reproduce the problem on any platform, without
having certain hardware requirements.
The losetup command has a --sector-size option that sets the logical
sector
On 27.02.2019 01:00, Cesare Leonardi wrote:
> On 25/02/19 16:33, Ingo Franzki wrote:
>> we just encountered an error when using LVM's pvmove command to move the
>> data from an un-encrypted LVM physical volume onto an encrypted volume.
>> After the pvmove has completed, the file system on the logi
On 25/02/19 16:33, Ingo Franzki wrote:
we just encountered an error when using LVM's pvmove command to move the data
from an un-encrypted LVM physical volume onto an encrypted volume.
After the pvmove has completed, the file system on the logical volume that
resides on the moved physical volume
Hi,
we just encountered an error when using LVM's pvmove command to move the data
from an un-encrypted LVM physical volume onto an encrypted volume.
After the pvmove has completed, the file system on the logical volume that
resides on the moved physical volumes is corrupted and all data on this