Read and write operations issued to a dm-crypt target may be split
according to the dm-crypt internal limits defined by the max_read_size
and max_write_size module parameters (default is 128 KB). The intent is
to improve processing time of large BIOs by splitting them into smaller
operations that c
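For illustration only, a minimal sketch of the clamping idea (not dm-crypt's actual code): given a byte limit such as max_read_size or max_write_size, a target accepts at most that many bytes of a BIO and splits off the remainder.

#include <linux/bio.h>
#include <linux/blkdev.h>

/*
 * Illustrative helper: return how many 512-byte sectors of @bio to process
 * before splitting, given a byte limit (e.g. the 128 KB default).
 */
static unsigned int limit_split_sectors(struct bio *bio, unsigned int limit_bytes)
{
	unsigned int limit_sectors = limit_bytes >> SECTOR_SHIFT;

	return min(bio_sectors(bio), limit_sectors);
}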
On 6/25/25 3:18 PM, Christoph Hellwig wrote:
> On Wed, Jun 25, 2025 at 03:14:51PM +0900, Damien Le Moal wrote:
>> On 6/25/25 3:12 PM, Christoph Hellwig wrote:
>>> On Wed, Jun 25, 2025 at 02:59:05PM +0900, Damien Le Moal wrote:
+bool bio_needs_zone_write_plugging(struct bio *bio)
>>>
>>> Can you use this in blk_zone_plug_bio instead of duplicating the logic?
On Wed, Jun 25, 2025 at 03:14:51PM +0900, Damien Le Moal wrote:
> On 6/25/25 3:12 PM, Christoph Hellwig wrote:
> > On Wed, Jun 25, 2025 at 02:59:05PM +0900, Damien Le Moal wrote:
> >> +bool bio_needs_zone_write_plugging(struct bio *bio)
> >
> > Can you use this in blk_zone_plug_bio instead of duplicating the logic?
On 6/25/25 3:15 PM, Christoph Hellwig wrote:
> On Wed, Jun 25, 2025 at 02:59:06PM +0900, Damien Le Moal wrote:
>> Any zoned DM target that requires zone append emulation will use the
>> block layer zone write plugging. In such case, DM target drivers must
>> not split BIOs using dm_accept_partial_bio() as doing so can potentially lead to deadlocks with queue freeze operations.
On 6/25/25 3:12 PM, Christoph Hellwig wrote:
> On Wed, Jun 25, 2025 at 02:59:05PM +0900, Damien Le Moal wrote:
>> +bool bio_needs_zone_write_plugging(struct bio *bio)
>
> Can you use this in blk_zone_plug_bio instead of duplicating the logic?
I thought about doing that, but we would still need to
Ah, I guess that addressed my comment for patch 2..
On Wed, Jun 25, 2025 at 02:59:06PM +0900, Damien Le Moal wrote:
> Any zoned DM target that requires zone append emulation will use the
> block layer zone write plugging. In such case, DM target drivers must
> not split BIOs using dm_accept_partial_bio() as doing so can potentially
> lead to deadlocks with queue freeze operations.
On Wed, Jun 25, 2025 at 02:59:05PM +0900, Damien Le Moal wrote:
> +bool bio_needs_zone_write_plugging(struct bio *bio)
Can you use this in blk_zone_plug_bio instead of duplicating the logic?
I also wonder if we should inline it, as despite looking quite complex
it should compile down to just a few instructions.
Jens, Mike, Mikulas,
Any zoned DM device using target drivers that internally split BIOs
using dm_accept_partial_bio() can cause deadlocks with concurrent queue
freeze operations. Furthermore, target splitting write operations used
to emulate zone append requests break the emulation. This patch series addresses both issues.
DM targets must not split zone append and write operations using
dm_accept_partial_bio() as doing so is forbidden for zone append BIOs,
breaks zone append emulation using regular write BIOs and potentially
creates deadlock situations with queue freeze operations.
Modify dm_accept_partial_bio() to disallow such splits.
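A rough sketch of what such a guard could look like (illustrative only, not the actual patch); it reuses the bio_needs_zone_write_plugging() helper introduced earlier in the series:

void dm_accept_partial_bio(struct bio *bio, unsigned int n_sectors)
{
	/*
	 * Splitting a BIO handled by zone write plugging would break zone
	 * append emulation and can deadlock against a queue freeze, so
	 * refuse to do it.
	 */
	if (WARN_ON_ONCE(bio_needs_zone_write_plugging(bio)))
		return;

	/* ... existing partial-acceptance accounting follows ... */
}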
Any zoned DM target that requires zone append emulation will use the
block layer zone write plugging. In such case, DM target drivers must
not split BIOs using dm_accept_partial_bio() as doing so can potentially
lead to deadlocks with queue freeze operations. Regular write operations
used to emulate zone append requests must not be split either, as doing so breaks the emulation.
In preparation for fixing device mapper zone write handling, introduce
the helper function bio_needs_zone_write_plugging() to test if a BIO
requires handling through zone write plugging. This function returns
true for any write operation to a zoned block device. For zone append
operations, true is returned only if the device requires zone append emulation.
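Based on that description, a helper with these semantics could look roughly like the sketch below; bdev_needs_zone_append_emulation() is a placeholder for the emulation test, not an existing kernel API:

static inline bool bio_needs_zone_write_plugging(struct bio *bio)
{
	enum req_op op = bio_op(bio);

	/* Zone write plugging only concerns zoned block devices. */
	if (!bdev_is_zoned(bio->bi_bdev))
		return false;

	/* Zone append needs plugging only when it has to be emulated. */
	if (op == REQ_OP_ZONE_APPEND)
		return bdev_needs_zone_append_emulation(bio->bi_bdev);

	/* Any other write operation to a zoned device needs plugging. */
	return op == REQ_OP_WRITE || op == REQ_OP_WRITE_ZEROES;
}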
On Tue, 24 Jun 2025, Damien Le Moal wrote:
> On 6/24/25 12:44 AM, Mikulas Patocka wrote:
> >
> >
> > On Mon, 23 Jun 2025, Mikulas Patocka wrote:
> >
> >> Applied, thanks.
> >>
> >> Mikulas
> >
> > Applied and reverted :)
> >
> > I've just realized that this patch won't work because the val
Cc: Hannes Reinecke
Cc: Martin Wilck
Cc: Benjamin Marzinski
Cc: Christophe Varoqui
Cc: DM-DEVEL ML
Signed-off-by: Xose Vazquez Perez
---
Missing:
util.c: maybe GPL-2.0-or-later ???
libmpathutil/util.c: * License: LGPL-2.1-or-later
libmpathutil/util.c: * Code copied from busybox (GPLv2 or later)
Introduce segment.{c,h}, an internal abstraction that encapsulates
everything related to a single pcache *segment* (the fixed-size
allocation unit stored on the cache-device).
* On-disk metadata (`struct pcache_segment_info`)
- Embedded `struct pcache_meta_header` for CRC/sequence handling.
-
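A hedged sketch of the on-disk layout described above; apart from the two struct names quoted in the changelog, the field names are assumptions:

#include <linux/types.h>

struct pcache_meta_header {
	__u32	crc;	/* CRC over the metadata body */
	__u32	seq;	/* sequence number, the newest valid replica wins */
};

struct pcache_segment_info {
	struct pcache_meta_header header;	/* embedded CRC/sequence header */
	__u32	flags;				/* illustrative: segment state bits */
	__u32	data_off;			/* illustrative: data offset on the cache device */
};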
Introduce cache_req.c, the high-level engine that
drives I/O requests through dm-pcache. It decides whether data is served
from the cache or fetched from the backing device, allocates new cache
space on writes, and flushes dirty ksets when required.
* Read path
- Traverses the striped RB-trees to determine whether the requested range is cached.
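An illustrative control-flow sketch of the hit/miss decision described above; all function and field names here are hypothetical:

static int pcache_handle_read(struct pcache_cache *cache, struct pcache_request *req)
{
	struct pcache_cache_key *key;

	/* Look the requested range up in the striped RB-tree index. */
	key = pcache_index_lookup(cache, req->off, req->len);
	if (key)
		return pcache_copy_from_cache(cache, key, req);	/* cache hit */

	/* Cache miss: fall through to the backing block device. */
	return backing_dev_submit_read(cache->backing_dev, req);
}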
Add cache_dev.{c,h} to manage the persistent-memory device that stores
all pcache metadata and data segments. Splitting this logic out keeps
the main dm-pcache code focused on policy while cache_dev handles the
low-level interaction with the DAX block device.
* DAX mapping
- Opens the underlying DAX block device.
Hi Mikulas,
This is V1 for dm-pcache, please take a look.
Code:
https://github.com/DataTravelGuide/linux tags/pcache_v1
Changelogs from RFC-V2:
- use crc32c to replace crc32
- only retry pcache_req when cache full, add pcache_req into defer_list,
and wait cac
Add cache.c and cache.h that introduce the top-level
“struct pcache_cache”. This object glues together the backing block
device, the persistent-memory cache device, segment array, RB-tree
indexes, and the background workers for write-back and garbage
collection.
* Persistent metadata
- pcache_ca
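The rough shape of that object, as far as it can be inferred from the changelog (member names are illustrative, not the actual definition):

#include <linux/rbtree.h>
#include <linux/workqueue.h>

struct pcache_cache {
	struct pcache_backing_dev	*backing_dev;	/* backing block device */
	struct pcache_cache_dev		*cache_dev;	/* persistent-memory cache device */
	struct pcache_cache_segment	*segments;	/* array of data segments */
	struct rb_root			*key_trees;	/* striped RB-tree indexes */
	struct delayed_work		writeback_work;	/* background write-back worker */
	struct delayed_work		gc_work;	/* background garbage collection */
};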
Add *cache_key.c* which becomes the heart of dm-pcache’s
in-memory index and on-media key-set (“kset”) format.
* Key objects (`struct pcache_cache_key`)
- Slab-backed allocator & ref-count helpers
- `cache_key_encode()/decode()` translate between in-memory keys and
their on-disk representation.
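A sketch of that translation; only the two function names come from the changelog, the on-disk field layout below is assumed:

#include <linux/types.h>
#include <asm/byteorder.h>

struct pcache_cache_key_ondisk {
	__le64	off;		/* logical offset on the backing device */
	__le32	len;		/* extent length */
	__le32	seg_id;		/* cache segment holding the data */
	__le32	seg_off;	/* offset within that segment */
};

static void cache_key_encode(struct pcache_cache_key_ondisk *dst,
			     struct pcache_cache_key *key)
{
	dst->off     = cpu_to_le64(key->off);
	dst->len     = cpu_to_le32(key->len);
	dst->seg_id  = cpu_to_le32(key->seg_id);
	dst->seg_off = cpu_to_le32(key->seg_off);
}

static void cache_key_decode(struct pcache_cache_key *key,
			     struct pcache_cache_key_ondisk *src)
{
	key->off     = le64_to_cpu(src->off);
	key->len     = le32_to_cpu(src->len);
	key->seg_id  = le32_to_cpu(src->seg_id);
	key->seg_off = le32_to_cpu(src->seg_off);
}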
Introduce cache_gc.c, a self-contained engine that reclaims cache
segments whose data have already been flushed to the backing device.
Running in the cache workqueue, the GC keeps segment usage below the
user-configurable *cache_gc_percent* threshold.
* need_gc() – decides when to trigger GC by checking segment usage against the cache_gc_percent threshold.
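A minimal sketch of that trigger check, assuming per-cache counters for total and used segments (names are illustrative):

static bool need_gc(struct pcache_cache *cache)
{
	unsigned int used_percent = cache->nr_used_segs * 100 / cache->nr_segs;

	/* Start reclaiming clean segments once usage crosses the threshold. */
	return used_percent >= cache->cache_gc_percent;
}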
Introduce cache_writeback.c, which implements the asynchronous write-back
path for pcache. The new file is responsible for detecting dirty data,
organising it into an in-memory tree, issuing bios to the backing block
device, and advancing the cache’s *dirty tail* pointer once data has
been safely written back.
Introduce *cache_segment.c*, the in-memory/on-disk glue that lets a
`struct pcache_cache` manage its array of data segments.
* Metadata handling
- Loads the most-recent replica of both the segment-info block
(`struct pcache_segment_info`) and per-segment generation counter
(`struct pcach
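The replica selection described above could look roughly like this sketch; pcache_meta_crc() and the replica array layout are assumptions based on the series description:

static struct pcache_segment_info *
segment_info_load(struct pcache_segment_info *replicas, unsigned int nr_replicas)
{
	struct pcache_segment_info *best = NULL;
	unsigned int i;

	for (i = 0; i < nr_replicas; i++) {
		struct pcache_segment_info *info = &replicas[i];

		/* Discard replicas whose CRC does not match their contents. */
		if (info->header.crc != pcache_meta_crc(info, sizeof(*info)))
			continue;

		/* Keep the replica with the highest sequence number. */
		if (!best || info->header.seq > best->header.seq)
			best = info;
	}
	return best;
}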
This patch introduces *backing_dev.{c,h}*, a self-contained layer that
handles all interaction with the *backing block device* where cache
write-back and cache-miss reads are serviced. Isolating this logic
keeps the core dm-pcache code free of low-level bio plumbing.
* Device setup / teardown
-
Consolidate common PCACHE helpers into a new header so that subsequent
patches can include them without repeating boiler-plate.
- Logging macros with unified prefix and location info.
- Common constants (KB/MB helpers, metadata replica count, CRC seed).
- On-disk metadata header definition and CRC
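A hedged sketch of what such a shared header could contain; the macro and constant names are assumptions, only their purpose comes from the changelog (crc32c per the cover letter), and the metadata header struct itself is sketched with the segment patch above:

#ifndef _DM_PCACHE_H
#define _DM_PCACHE_H

#include <linux/crc32c.h>
#include <linux/printk.h>

/* Size helpers and metadata constants (illustrative values). */
#define PCACHE_KB(n)		((n) << 10)
#define PCACHE_MB(n)		((n) << 20)
#define PCACHE_META_REPLICAS	2	/* copies of each metadata block kept on media */
#define PCACHE_CRC_SEED		0x3d65c043U

/* Unified logging prefix with location info. */
#define pcache_err(fmt, ...) \
	pr_err("dm-pcache: %s:%u " fmt "\n", __func__, __LINE__, ##__VA_ARGS__)

#endif /* _DM_PCACHE_H */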