On 2/25/26 08:20, Ackerley Tng wrote:
> __filemap_get_folio_mpol() is parametrized by a bunch of GFP flags, which
FGP?
> adds complexity for the reader. Since guest_memfd doesn't meaningfully use
> any of the other FGP flags, undo that complexity by directly calling
> filemap_alloc_folio().
>
> Directly calling filemap_alloc_folio() also allows the order of 0 to be
> explicitly specified, which is the only order guest_memfd supports. This is
> easier to understand, and removes the chance of anything else being able to
> unintentionally influence allocated folio size.
Isn't it determined by FGF_GET_ORDER(), so when you pass FGP_LOCK | FGP_CREAT
and no order, it's straightforward that the order will be 0?
But if this helps with patch 4, ok.
> Signed-off-by: Ackerley Tng <[email protected]>
Acked-by: Vlastimil Babka <[email protected]>
> ---
>  virt/kvm/guest_memfd.c | 51 +++++++++++++++++++++++++++++++++++---------------
> 1 file changed, 36 insertions(+), 15 deletions(-)
>
> diff --git a/virt/kvm/guest_memfd.c b/virt/kvm/guest_memfd.c
> index 2df27b6443115..2488d7b8f2b0d 100644
> --- a/virt/kvm/guest_memfd.c
> +++ b/virt/kvm/guest_memfd.c
> @@ -107,6 +107,39 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
> return __kvm_gmem_prepare_folio(kvm, slot, index, folio);
> }
>
> +static struct folio *__kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
> +{
> + /* TODO: Support huge pages. */
> + struct mempolicy *policy;
> + struct folio *folio;
> + gfp_t gfp;
> + int ret;
> +
> + /*
> + * Fast-path: See if folio is already present in mapping to avoid
> + * policy_lookup.
> + */
> + folio = filemap_lock_folio(inode->i_mapping, index);
> + if (!IS_ERR(folio))
> + return folio;
> +
> + gfp = mapping_gfp_mask(inode->i_mapping);
> +
> + policy = mpol_shared_policy_lookup(&GMEM_I(inode)->policy, index);
> + folio = filemap_alloc_folio(gfp, 0, policy);
> + mpol_cond_put(policy);
> + if (!folio)
> + return ERR_PTR(-ENOMEM);
> +
> + ret = filemap_add_folio(inode->i_mapping, folio, index, gfp);
> + if (ret) {
> + folio_put(folio);
> + return ERR_PTR(ret);
> + }
> +
> + return folio;
> +}
> +
> /*
> * Returns a locked folio on success. The caller is responsible for
> * setting the up-to-date flag before the memory is mapped into the guest.
> @@ -118,23 +151,11 @@ static int kvm_gmem_prepare_folio(struct kvm *kvm, struct kvm_memory_slot *slot,
> */
> static struct folio *kvm_gmem_get_folio(struct inode *inode, pgoff_t index)
> {
> - /* TODO: Support huge pages. */
> - struct mempolicy *policy;
> struct folio *folio;
>
> - /*
> - * Fast-path: See if folio is already present in mapping to avoid
> - * policy_lookup.
> - */
> - folio = filemap_lock_folio(inode->i_mapping, index);
> - if (!IS_ERR(folio))
> - return folio;
> -
> - policy = mpol_shared_policy_lookup(&GMEM_I(inode)->policy, index);
> - folio = __filemap_get_folio_mpol(inode->i_mapping, index,
> - FGP_LOCK | FGP_CREAT,
> - mapping_gfp_mask(inode->i_mapping),
> -					 policy);
> - mpol_cond_put(policy);
> + do {
> + folio = __kvm_gmem_get_folio(inode, index);
> + } while (PTR_ERR(folio) == -EEXIST);
>
> /*
> * External interfaces like kvm_gmem_get_pfn() support dealing
>