[linux-lvm] Search in the list

2020-11-27 Thread Ilia Zykov
Hello.
Please tell me, is there a way to search the archive of this list?
https://www.redhat.com/archives/linux-lvm/index.html
Thanks.
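
(A hedged aside, not something stated in the thread: as far as I know the pipermail archive pages have no built-in search, but a site-restricted web search usually works; the keywords below are only an example.)

site:www.redhat.com/archives/linux-lvm thin pool zeroing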







Re: [linux-lvm] exposing snapshot block device

2019-10-24 Thread Ilia Zykov
On 23.10.2019 14:08, Gionatan Danti wrote:
> 
> For example, consider a completely filled 64k chunk thin volume (with
> thinpool having ample free space). Snapshotting it and writing a 4k
> block on origin will obviously cause a read of the original 64k chunk,
> an in-memory change of the 4k block and a write of the entire modified
> 64k block to a new location. But writing, say, a 1 MB block should *not*
> cause the same read on source: after all, the read data will be
> immediately discarded, overwritten by the changed 1 MB block.
> 
> However, my testing shows that source chunks are always read, even when
> completely overwritten.

Not only read, but sometimes written as well.
I observed this without a snapshot, with only zeroing enabled: before new
chunks were written by "dd bs=1048576 ...", the chunks were zeroed first.
For security that is good, though. IMHO, the best choice in this case would be
to write the chunks to the disk first and only then hand those chunks over
to the volume.
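
A rough sketch of how the extra zeroing traffic can be observed (the VG name "vg", the pool/LV names and the sizes are only illustrative, not from the original report):

lvcreate -L 10G -Z y -c 64k --thinpool pool vg        # thin pool with zeroing enabled
lvcreate -n thin1 -V 5G --thinpool vg/pool
iostat -x 1 &                                         # watch write traffic on the PV under the pool
dd if=/dev/zero of=/dev/vg/thin1 bs=1048576 count=1024 oflag=direct
# repeating the same test on a pool created with -Z n shows the difference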

> 
> Am I missing something?
> 






Re: [linux-lvm] exposing snapshot block device

2019-10-23 Thread Ilia Zykov
On 23.10.2019 17:40, Gionatan Danti wrote:
> On 23/10/19 15:05, Zdenek Kabelac wrote:
>> Yep - we recommend disabling zeroing as soon as the chunk size is >512K.
>>
>> But for 'security' reasons the option is left to users, to select what
>> fits their needs in the best way - there is no 'one solution fits them
>> all' in this case.
> 
> Sure, but again: if writing a block larger than the underlying chunk,
> zeroing can (and should) be skipped. Yet I seem to remember that the new

In this case, if a reset happens before a full chunk has been written, the
tail of the chunk will contain foreign old data (if the metadata has already
been written) - a small security problem.
We would need to write the data to disk first and only then hand the fully
written chunk over to the volume, but I think that complicates matters a little.
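
To make the quoted recommendation concrete, a small sketch (pool and VG names are illustrative):

lvcreate -L 100G -c 1M -Z n --thinpool pool vg   # big chunks: zeroing disabled for speed
lvchange -Z y vg/pool                            # re-enable zeroing if leaking old data is the bigger concern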

> block is zeroed in any case, even if it is going to be rewritten entirely.
> 
> Do I remember wrongly?
> 






Re: [linux-lvm] pvresize will cause a meta-data corruption with error message "Error writing device at 4096 length 512"

2019-09-11 Thread Ilia Zykov
Maybe this?

Please note that this problem can also happen in other cases, such as
mixing disks with different block sizes (e.g. SCSI disks with 512 bytes
and s390x-DASDs with 4096 block size).

https://www.redhat.com/archives/linux-lvm/2019-February/msg00018.html
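
(A hedged aside, not from the original report: a quick way to check whether the PVs in an affected VG really mix sector sizes - /dev/sdaf is taken from the error output quoted below; check the other PVs the same way:)

pvs -o pv_name,vg_name
lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdaf
blockdev --getss --getpbsz /dev/sdaf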



On 11.09.2019 12:17, Gang He wrote:
> Hello List,
> 
> Our user encountered a metadata corruption problem when running the pvresize
> command after upgrading to LVM2 v2.02.180 from v2.02.120.
> 
> The details are as below; we have the following environment:
> - Storage: HP XP7 (SAN) - LUNs are presented to ESX via RDM
> - VMware ESXi 6.5
> - SLES 12 SP4 guest
> 
> The resize happened this way (it has been our standard way for years) - however, this
> is our first resize after upgrading SLES 12 SP3 to SLES 12 SP4, and until this
> upgrade we never had a problem like this:
> - split continuous access on the storage box, resize the LUN on the XP7
> - recreate CA on the XP7
> - scan on ESX
> - rescan-scsi-bus.sh -s on the SLES VM
> - pvresize  (at this step the error happened)
> 
> huns1vdb01:~ # pvresize /dev/disk/by-id/scsi-360060e80072a66302a663274
>   Error writing device /dev/sdaf at 4096 length 512.
>   Failed to write mda header to /dev/sdaf fd -1
>   Failed to update old PV extension headers in VG vghundbhulv_ar.
>   Error writing device /dev/disk/by-id/scsi-360060e80072a66302a6631ec at 4096 length 512.
>   Failed to write mda header to /dev/disk/by-id/scsi-360060e80072a66302a6631ec fd -1
>   Failed to update old PV extension headers in VG vghundbhulk_ar.
>   VG info not found after rescan of vghundbhulv_r2
>   VG info not found after rescan of vghundbhula_r1
>   VG info not found after rescan of vghundbhuco_ar
>   Error writing device /dev/disk/by-id/scsi-360060e80072a66302a6631e8 at 4096 length 512.
>   Failed to write mda header to /dev/disk/by-id/scsi-360060e80072a66302a6631e8 fd -1
>   Failed to update old PV extension headers in VG vghundbhula_ar.
>   VG info not found after rescan of vghundbhuco_r2
>   Error writing device /dev/disk/by-id/scsi-360060e80072a66302a66300b at 4096 length 512.
>   Failed to write mda header to /dev/disk/by-id/scsi-360060e80072a66302a66300b fd -1
>   Failed to update old PV extension headers in VG vghundbhunrm02_r2.
> 
> Any idea for this bug?
> 
> Thanks a lot.
> Gang
> 
> 






[linux-lvm] Maximum address used by a virtual disk on a thin pool.

2019-06-13 Thread Ilia Zykov
Hello.
Please tell me, how can I get the maximum address used by a virtual disk
(a disk created with -V VirtualSize)? I have several large virtual disks,
but they use only a small part at the beginning of the disk. For example:

# lvs
  LV VG  Attr   LSize   Pool Origin Data%
  mylvm  CVG Vwi-aot--- 100,00g fastheap7,13


Please advise me how to determine the address above which the virtual disk
has not allocated real disk space, so that all data read from addresses
greater than it is guaranteed to be zeros. Or maybe, how can I get a
map of the used chunks of the disk?

Thanks.
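
(A hedged sketch, not an answer from the thread: one way to get a map of the used chunks is to dump the pool metadata with thin_dump from thin-provisioning-tools. It assumes the pool LV is the one shown in the Pool column of the lvs output above and that the usual /dev/mapper naming applies - adjust the names to your setup.)

dmsetup message /dev/mapper/CVG-fastheap-tpool 0 reserve_metadata_snap
thin_dump --metadata-snap /dev/mapper/CVG-fastheap_tmeta > /tmp/thin-mappings.xml
dmsetup message /dev/mapper/CVG-fastheap-tpool 0 release_metadata_snap
# the XML lists the mapped ranges of every thin device in the pool; the highest
# mapped virtual block of the device is the maximum address actually backed by data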





Re: [linux-lvm] Fast thin volume preallocation?

2019-06-05 Thread Ilia Zykov
> 
>> Presumably you want a thick volume but inside a thin pool so that you
>> can use snapshots?
>> If so have you considered the 'external snapshot' feature?
> 
> Yes, in some cases they are quite useful. Still, a fast volume
> allocation can be a handy addition.
> 

Hello.
Can I use an external snapshot for fast zero allocation?
Here "thpool" is an lvmthin pool with LVM zeroing disabled:

# lvcreate -n ext2T -V 2TiB --thinpool thpool VG
# lvchange --permission r VG/ext2T

# lvcreate -n zeroed_lve -s VG/ext2T --thinpool VG/thpool

Or will it be the same as with zeroing enabled?
Thanks.





Re: [linux-lvm] Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size

2019-03-05 Thread Ilia Zykov
Hello.

>> THAT is a crucial observation.  It's not an LVM bug, but the filesystem
>> trying to read 1024 bytes on a 4096 device.  
> Yes, that's probably the reason. Nevertheless, it's not really the FS's fault,
> since it was moved by LVM to a 4096 device.
> The FS does not know anything about the move, so it reads in the block size 
> it was created with (1024 in this case).
> 
> I still think LVM should prevent one from mixing devices with different 
> physical block sizes, or at least warn when pvmoving or lvextending onto a PV 
> with a larger block size, since this can cause trouble.
> 

In this case, "dd" tool and others should prevent too.

Because after:

dd if=/dev/DiskWith512block bs=4096 of=/dev/DiskWith4Kblock

You couldn't mount the "/dev/DiskWith4Kblock" with the same error ;)
/dev/DiskWith512block has ext4 fs with 1k block.

P.S.
LVM,dd .. are low level tools and doesn't know about hi level anything.
And in the your case and others cases can't know. You should test(if you
need) the block size with other tools before moving or copying.
Not a lvm bug.
Thank you.
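
For example (reusing the hypothetical device names from above), such a check could be as simple as comparing the filesystem block size with the target's logical sector size:

dumpe2fs -h /dev/DiskWith512block | grep 'Block size'   # 1024 in this example
blockdev --getss /dev/DiskWith4Kblock                   # 4096 -> a 1k-block ext4 cannot be used there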




Re: [linux-lvm] Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size

2019-03-03 Thread Ilia Zykov
>>
>>  smartctl -i /dev/sdb; blockdev --getbsz --getpbsz /dev/sdb
>> Device Model: HGST HUS722T2TALA604
>> User Capacity:    2,000,398,934,016 bytes [2.00 TB]
>> Sector Size:      512 bytes logical/physical
>> Rotation Rate:    7200 rpm
>> Form Factor:  3.5 inches
>> 4096
>> 512
>>
>> As you see, "--getbsz" is always 4096.
> I also see the logical block size as 4096 for all devices on my system.
>> But I think it should always be 512.
>> What does it mean?
> I have seen the following description about logical and physical block sizes 
> somewhere on the internet:
> "Logical block sizes are the units used by the 'kernel' for read/write 
> operations.

The kernel can, but usually does not want to, because it reduces performance.

> Physical block sizes are the units which 'disk controllers' use for 
> read/write operations."

Not the disk controller on the motherboard, but the controller inside the disk.
We don't have access to it.

> 
> For the problem mentioned in this thread, the physical block size is what you 
> are looking for.
>>

I think it is a BUG in "blockdev".
My question was:

Can this error (or a similar one) be related to the problem with pvmove?
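
As a side note (a sketch using /dev/sdb from the quoted output): blockdev can also print the logical sector size, which is the value that actually limits what the device can read and write.

blockdev --getss --getpbsz --getbsz /dev/sdb
# --getss   : logical sector size reported by the device (512 for this drive)
# --getpbsz : physical sector size (also 512 here)
# --getbsz  : block size the kernel currently uses for this device's buffers -
#             a software value, not a property of the hardware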




Re: [linux-lvm] Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size

2019-02-28 Thread Ilia Zykov
> At the time the file system was created (possibly many years ago), I did not
> know that I would ever move it to a device with a larger block size.
>

For this purpose, all 4k disks have a logical sector size of 512.
Don't look at "blockdev --getbsz"; it is not a property of the physical (real)
device.






Re: [linux-lvm] Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size

2019-02-28 Thread Ilia Zykov
> Discarding device blocks: done
> Creating filesystem with 307200 1k blocks and 76912 inodes
> ..
> # pvs
>   /dev/LOOP_VG/LV: read failed after 0 of 1024 at 0: Invalid argument
>   /dev/LOOP_VG/LV: read failed after 0 of 1024 at 314507264: Invalid argument
>   /dev/LOOP_VG/LV: read failed after 0 of 1024 at 314564608: Invalid argument
>   /dev/LOOP_VG/LV: read failed after 0 of 1024 at 4096: Invalid argument
>   PV   VG  Fmt  Attr PSize   PFree
>   /dev/loop0   LOOP_VG lvm2 a--  496.00m 496.00m
>   /dev/mapper/enc-loop LOOP_VG lvm2 a--  492.00m 192.00m
> 
> In case the filesystem of the logical volume is not mounted at the time of 
> pvmove, it gets corrupted anyway, but you only see errors when trying to 
> mount it.
> 

It's because your FS had 1k blocks.
The new device can't be read with a 1k block size.
If you plan to pvmove onto a device with 4k blocks, maybe you need to make the fs with:
"mkfs.ext4 -b 4096"

See comments:
https://bugzilla.redhat.com/show_bug.cgi?id=1684078





Re: [linux-lvm] Filesystem corruption with LVM's pvmove onto a PV with a larger physical block size

2019-02-28 Thread Ilia Zykov
> 
> Well, there are the following 2 commands:
> 
> Get physical block size: 
>  blockdev --getpbsz 
> Get logical block size:
>  blockdev --getbsz 
> 
> Filesystems seem to care about the physical block size only, not the logical 
> block size.
> 
> So as soon as you have PVs with different physical block sizes (as reported 
> by blockdev --getpbsz) I would be very careful...

Hello everybody.
Maybe I don't understand what you mean. What does the logical block size
mean? But on my machines (CentOS 7), this utility gives me strange
results (output reduced):

 smartctl -i /dev/sda; blockdev --getbsz --getpbsz /dev/sda
Device Model: INTEL SSDSC2KB480G8
User Capacity:    480,103,981,056 bytes [480 GB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    Solid State Device
4096
4096

 smartctl -i /dev/sdb; blockdev --getbsz --getpbsz /dev/sdb
Device Model: HGST HUS722T2TALA604
User Capacity:    2,000,398,934,016 bytes [2.00 TB]
Sector Size:      512 bytes logical/physical
Rotation Rate:    7200 rpm
Form Factor:  3.5 inches
4096
512

As you see, "--getbsz" is always 4096.
But I think it should always be 512.
What does it mean?

Thank you.
Ilia.




Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Ilia Zykov
> 
> dm-writecache could be seen as an 'extension' of your page cache to hold
> a longer list of dirty pages...
> 
> Zdenek
> 

Does it mean that the dm-writecache is always empty after a reboot?
Thanks.





Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Ilia Zykov


On 19.10.2018 16:08, Zdenek Kabelac wrote:
> On 19. 10. 2018 at 14:45, Gionatan Danti wrote:
>> On 19/10/2018 12:58, Zdenek Kabelac wrote:
>>> Hi
>>>
>>> Writecache simply doesn't care about caching your reads at all.
>>> Your RAM with its page caching mechanism keeps read data as long as
>>> there is free RAM for this - the less RAM goes to the page cache, the fewer
>>> read operations remain cached.
>>
>> Hi, does it mean that to have *both* fast write cache *and* read cache
>> one should use a dm-writecache target + a dm-cache writethrough target
>> (possibly pointing to different devices)?
>>
>> Can you quantify/explain why and how much faster dm-writecache is for a heavy
>> write workload?
> 
> 
> 
> Hi
> 
> It's rather that different workloads benefit from different
> caching approaches.
> 
> If your system is heavy on writes - dm-writecache is what you want;
> if you mostly read - dm-cache will win.
> 
> That's why there is also dmstats, to help identify hotspots and the overall
> logic.
> There is nothing that always wins in all cases - so ATM 2 different targets
> are provided - NVDIMMs already seem to change the game a lot...
> 
> dm-writecache could be seen as an 'extension' of your page cache to hold
> a longer list of dirty pages...
> 
> Zdenek

Sorry, but I don't understand either. What happens if a reboot occurs between
the data being written to the fast cache and being written back to the slow device?
After the reboot, which data will be read: the new data from the fast cache or the
old data from the slow device? And what data will be read by
'dd if=/dev/cached iflag=direct'?
Thanks.




Re: [linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-19 Thread Ilia Zykov


On 19.10.2018 12:12, Zdenek Kabelac wrote:
> On 19. 10. 2018 at 0:56, Ilia Zykov wrote:
>> Maybe it will be implemented later? But it seems a little strange to me
>> that there is no way to clear garbage out of the cache.
>> Maybe I do not understand something? Could you please explain this
>> behavior? For example:
> 
> Hi
> 
> Applying my brain logic here:
> 
> Cache (by default) operates on 32KB chunks.
> SSDs (usually) have a minimal trimmable block size of 512KB.
> 
> The conclusion can be that it is non-trivial to even implement TRIM support
> for the cache - something would need to keep a secondary data structure
> holding the information about which cached blocks are
> completely 'unused/trimmed' and available for a 'complete block trim'
> (i.e. something like the way ext4 implements 'fstrim' support).
> 
> Second thought - if there is a wish to completely 'erase' the cache, there
> is a very simple path: use 'lvconvert --uncache', and once the cache
> is needed again, create the cache again from scratch.
> 
> Note - dm-cache is a SLOW-moving cache - so it doesn't target accelerating
> one-time usage - i.e. if you read a block just once from slow storage, it
> doesn't mean it will be immediately cached.
> 
> Dm-cache is about keeping info about used blocks on 'slow' storage (hdd),
> which typically does not support/implement TRIM. There could possibly be
> a multi-layer cache, where even the cached device can handle
> TRIM - but this kind of construct is not really supported and it's even
> unclear whether it would make any sense to introduce this concept ATM (since
> there would need to be some well-measurable benefit).
> 
> And a final note - there is upcoming support for accelerating writes with
> the new dm-writecache target.
> 
> Regards
> 
> 
> Zdenek
> 

Thank you, I supposed so.
One more little question about dm-writecache.
The description says:

"It doesn't cache reads because reads are supposed to be cached in page cache
in normal RAM."

Does it only mean that missed reads are not promoted to the cache?
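
For reference, a rough sketch of the 'erase and recreate' path mentioned above (the VG/LV names test/data and test/fast follow the lvs output from the original question; the SSD PV path is only an example):

lvconvert --uncache test/data                                  # detach and delete the cache pool
lvcreate --type cache-pool -n fast -L 1G test /dev/fast_ssd    # recreate the pool on the fast PV
lvconvert --type cache --cachepool test/fast test/data         # attach it to the origin again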





[linux-lvm] Why doesn't the lvmcache support the discard (trim) command?

2018-10-18 Thread Ilia Zykov
Maybe it will be implemented later? But it seems a little strange to me that
there is no way to clear garbage out of the cache.
Maybe I do not understand something? Could you please explain this behavior?
For example:

I have a fully cached partition, with a full cache:

[root@localhost ~]# df -h /data
Filesystem Size  Used Avail Use% Mounted on
/dev/mapper/test-data   12G   12G 0 100% /data
[root@localhost ~]# lvs -a
  LV              VG   Attr       LSize  Pool   Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  data            test Cwi-aoC--- 12.00g [fast] [data_corig] 99.72  2.93            0.00
  [data_corig]    test owi-aoC--- 12.00g
  [fast]          test Cwi---C---  1.00g                     99.72  2.93            0.00
  [fast_cdata]    test Cwi-ao      1.00g
  [fast_cmeta]    test ewi-ao      8.00m
  [lvol0_pmspare] test ewi---      8.00m

I clear the partition and do fstrim:

[root@localhost ~]# rm -rf /data/*
[root@localhost ~]# fstrim -v /data
[root@localhost ~]# df -h /data
Filesystem Size  Used Avail Use% Mounted on
/dev/mapper/test-data   12G   41M   12G   1% /data


But the cache remained full:

[root@localhost ~]# lvs -a
  LV              VG   Attr       LSize  Pool   Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  data            test Cwi-aoC--- 12.00g [fast] [data_corig] 99.72  2.93            0.00
  [data_corig]    test owi-aoC--- 12.00g
  [fast]          test Cwi---C---  1.00g                     99.72  2.93            0.00
  [fast_cdata]    test Cwi-ao      1.00g
  [fast_cmeta]    test ewi-ao      8.00m
  [lvol0_pmspare] test ewi---      8.00m

Thank you.


