On Thu, Jan 22, 2026 at 04:40:56PM +0800, Gao Xiang wrote:
>> Having multiple folios for the same piece of memory can't work,
>> as we'd have unsynchronized state.
>
> Why not just keep the shared state in a unique place, and only
> keep the mapping + index separated?

That would not just require allocating the folios dynamically, but most
importantly splitting it up.  We'd then also need to find a way to
chain the folio_link structures from the main folio.  I'm not saying
this can't happen, but it feels very far out there and might have all
kinds of issues.
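
To make the objection concrete, here is a purely hypothetical sketch
(the folio_link name comes from this thread, but the layout below is
entirely an assumption, not existing kernel code):

/*
 * Hypothetical: each sub-folio aliasing the same memory would carry
 * its own mapping/index and be chained off the main folio so that
 * any state change could be propagated to all aliases.
 */
struct folio_link {
	struct folio		*folio;	/* aliasing sub-folio */
	struct list_head	 link;	/* chained from the main folio */
};

Every state transition on the main folio would then have to walk such
a chain, which is where the synchronization problem comes back in.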

>>>> I think the concept of using a backing file of some sort for the shared
>>>> pagecache (which I have no problem with at all), vs the imprecise
>>>
>>> In that way (actually Jingbo worked on that approach in 2023),
>>> we have to keep the shared data physically contiguous and
>>> even uncompressed, which cannot work for most cases.
>>
>> Why does that matter?
>
> Sorry, then I think I don't get the point, but we really
> need this for complete page cache sharing on a
> single physical machine.

Why do you need physically contiguous space to share it that way?

>>
>>> On the other hand, I do think the `fingerprint` design
>>> is much like persistent NFS file handles in some aspects
>>> (I don't want to equate it with that concept, but it is
>>> very similar) for a single trusted domain: we have to
>>> deal with multiple filesystem sources and mark them in a
>>> unique way within a domain.
>>
>> I don't really think they are similar in any way.
>
> Why are they not similar?  You still need persistent IDs
> in inodes across multiple fses.  If there are
> content-addressable immutable filesystems, their
> inodes could just use inode hashes as file handles
> instead of inode numbers + generations.

Sure, if they are well-defined, cryptographically secure hashes.  But
that's different from file handles, which don't address content at all,
but are just a handle to a given file that bypasses the path lookup.
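
As a concrete illustration (my own userspace sketch, not anything from
the patchset): the existing name_to_handle_at() syscall hands back
exactly such an opaque handle, which for most filesystems encodes the
inode number + generation and nothing about the file's content:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	struct file_handle *fh;
	int mount_id;
	unsigned int i;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <path>\n", argv[0]);
		return 1;
	}

	fh = malloc(sizeof(*fh) + MAX_HANDLE_SZ);
	if (!fh)
		return 1;
	fh->handle_bytes = MAX_HANDLE_SZ;

	if (name_to_handle_at(AT_FDCWD, argv[1], fh, &mount_id, 0) < 0) {
		perror("name_to_handle_at");
		return 1;
	}

	/* Opaque fs-specific bytes, not a content address. */
	printf("handle_type=%d handle_bytes=%u\n",
	       fh->handle_type, fh->handle_bytes);
	for (i = 0; i < fh->handle_bytes; i++)
		printf("%02x", fh->f_handle[i]);
	printf("\n");

	free(fh);
	return 0;
}

A content-addressable filesystem could of course stuff a content hash
into those opaque bytes, but nothing in the handle contract requires
or implies that, which is why the two concepts are not the same.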

>
> Thanks,
> Gao Xiang
---end quoted text---
