On 2022-06-15 11:46, Zhiyong Ye wrote:
I also think it meets expectations. But is there any other way to
optimize snapshot performance at the code level? Would it help to
reduce the chunk size in the code? I see in the help documentation
that the minimum chunk size is 64k.

I don't think forcing the code to use a smaller chunk size is a good idea. Considering the hard limit on metadata size (16 GiB max), 64K chunks are good for a ~16 TB thin pool, which is already relatively small.

A 16K chunk size, say, would be good for a 4 TB pool only, and so on. Moreover, sequential performance would suffer significantly.
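To make the scaling explicit, here is a rough back-of-the-envelope sketch. It assumes about 64 bytes of metadata per mapped chunk, a figure inferred from the numbers above (16 GiB of metadata covering ~16 TiB at 64K chunks), not an exact value from the kernel code:

```python
# Approximate maximum thin-pool size per chunk size, given the
# 16 GiB hard limit on thin-pool metadata.
# BYTES_PER_MAPPING = 64 is an ASSUMPTION inferred from this thread's
# numbers, not an exact figure from the dm-thin implementation.

METADATA_LIMIT = 16 * 2**30      # 16 GiB cap on the metadata device
BYTES_PER_MAPPING = 64           # assumed metadata cost per mapped chunk

def max_pool_bytes(chunk_size: int) -> int:
    """Rough upper bound on addressable pool size for a given chunk size."""
    mappings = METADATA_LIMIT // BYTES_PER_MAPPING
    return mappings * chunk_size

for chunk in (64 * 1024, 16 * 1024):
    tib = max_pool_bytes(chunk) / 2**40
    print(f"{chunk // 1024:>3}K chunks -> ~{tib:.0f} TiB pool")
# 64K chunks -> ~16 TiB pool
# 16K chunks -> ~4 TiB pool
```

For accurate sizing, the thin_metadata_size utility from thin-provisioning-tools can compute the actual metadata requirements rather than this approximation.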

I think you have to accept the performance hit on first chunk allocation & rewrite.
Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://listman.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
