Hi,

    Thanks for your patches; there are some comments inline.

Based on these comments, I propose an update on top of your patch, shown in [1].

BTW, I added a test case for this problem in dtg-tests:

https://github.com/DataTravelGuide/dtg-tests/blob/main/pcache.py.data/pcache_misc_tests/case21_gc_percent_persistence_after_recreate.sh

It updates gc_percent online, recreates the pcache device, and checks that gc_percent persists.


[1]:

diff --git a/drivers/md/dm-pcache/cache.c b/drivers/md/dm-pcache/cache.c
index 6d5001548628..4d6db733c9bd 100644
--- a/drivers/md/dm-pcache/cache.c
+++ b/drivers/md/dm-pcache/cache.c
@@ -42,12 +42,7 @@ static int cache_info_init(struct pcache_cache *cache, struct pcache_cache_optio
        if (IS_ERR(cache_info_addr))
                return PTR_ERR(cache_info_addr);

-       if (cache_info_addr) {
-               int index = ((char *)cache_info_addr - (char *)cache->cache_info_addr) /
-                               PCACHE_CACHE_INFO_SIZE;
-
-               cache->info_index = (index + 1) % PCACHE_META_INDEX_MAX;
-
+       if (cache_info_addr) {
                if (opts->data_crc !=
                                (cache->cache_info.flags & PCACHE_CACHE_FLAGS_DATA_CRC)) {
                        pcache_dev_err(pcache, "invalid option for data_crc: %s, expected: %s",
@@ -56,6 +51,8 @@ static int cache_info_init(struct pcache_cache *cache, struct pcache_cache_optio
                        return -EINVAL;
                }

+               cache->info_index = ((char *)cache_info_addr - (char *)cache->cache_info_addr) / PCACHE_CACHE_INFO_SIZE;
+
                return 0;
        }

diff --git a/drivers/md/dm-pcache/cache_segment.c b/drivers/md/dm-pcache/cache_segment.c
index 0b4bb08011ce..0a0016702ce7 100644
--- a/drivers/md/dm-pcache/cache_segment.c
+++ b/drivers/md/dm-pcache/cache_segment.c
@@ -60,7 +60,6 @@ static int cache_seg_info_load(struct pcache_cache_segment *cache_seg)
        cache_seg->info_index =
                ((char *)cache_seg_info_addr - (char *)cache_seg_info_addr_base) /
                PCACHE_SEG_INFO_SIZE;
-       cache_seg->info_index = (cache_seg->info_index + 1) % PCACHE_META_INDEX_MAX;
 out:
        mutex_unlock(&cache_seg->info_lock);


On 12/2/2025 8:11 PM, Li Chen wrote:
From: Li Chen <[email protected]>

dm-pcache stores metadata (cache_info and segment_info) in 4K-aligned
slots on the cache device, using sequence numbers and CRC to identify
the latest valid copy.

However, the cache_info and segment_info paths were computing their
on-media index using sizeof(struct) instead of the 4K metadata stride.
As a result:

   * cache_info updates (including gc_percent set via a dmsetup message)
     were written to invalid offsets between metadata slots and failed
     to persist across table reloads or reboots.

   * segment_info indexing became desynchronized, so rotation to the
     "next" slot no longer matched the location returned by
     pcache_meta_find_latest().

The issue can be reproduced with:

   dmsetup create pcache_vdb --table \
     "0 $(blockdev --getsz /dev/vdb) pcache /dev/pmem0 /dev/vdb 4 \
      cache_mode writeback data_crc false"

   # Check default gc_percent (70)
   dmsetup status pcache_vdb

   # Change gc_percent to 10
   dmsetup message pcache_vdb 0 "gc_percent 10"

   # Verify change is active in memory
   dmsetup status pcache_vdb

   # Reboot the system...

   # Without patch (gc_percent reverts to 70):
   dmsetup status pcache_vdb

   # With patch (gc_percent persists as 10):
   dmsetup status pcache_vdb

This series fixes the issue by deriving the metadata slot index from
the pointer returned by pcache_meta_find_latest(), using the 4K stride
(PCACHE_CACHE_INFO_SIZE / PCACHE_SEG_INFO_SIZE). This ensures that updates to
cache_info and segment_info are written to valid slots and remain
consistent with the on-media layout.

Li Chen (2):
   dm pcache: fix cache info indexing
   dm pcache: fix segment info indexing

  drivers/md/dm-pcache/cache.c         | 13 ++++++++++---
  drivers/md/dm-pcache/cache_segment.c |  6 +++++-
  2 files changed, 15 insertions(+), 4 deletions(-)
