On Tue, Dec 17, 2024 at 04:54:43PM -0500, Steven Sistare wrote:
> On 12/16/2024 1:19 PM, Peter Xu wrote:
> > On Fri, Dec 13, 2024 at 11:41:45AM -0500, Steven Sistare wrote:
> > > On 12/12/2024 4:22 PM, Peter Xu wrote:
> > > > On Thu, Dec 12, 2024 at 03:38:00PM -0500, Steven Sistare wrote:
> > > > > On 12/9/2024 2:42 PM, Peter Xu wrote:
> > > > > > On Mon, Dec 02, 2024 at 05:19:54AM -0800, Steve Sistare wrote:
> > > > > > > @@ -2089,13 +2154,23 @@ RAMBlock *qemu_ram_alloc_internal(ram_addr_t size, ram_addr_t max_size,
> > > > > > >      new_block->page_size = qemu_real_host_page_size();
> > > > > > >      new_block->host = host;
> > > > > > >      new_block->flags = ram_flags;
> > > > > > > +
> > > > > > > +    if (!host && !xen_enabled()) {
> > > > > >
> > > > > > Adding one more xen check is unnecessary.  The fact that this patch
> > > > > > needs it could mean the patch can be refactored.. because we have
> > > > > > xen checks in both ram_block_add() and in the fd allocation path.
> > > > > >
> > > > > > In the meantime, see:
> > > > > >
> > > > > > qemu_ram_alloc_from_fd():
> > > > > >     if (kvm_enabled() && !kvm_has_sync_mmu()) {
> > > > > >         error_setg(errp,
> > > > > >                    "host lacks kvm mmu notifiers, -mem-path unsupported");
> > > > > >         return NULL;
> > > > > >     }
> > > > > >
> > > > > > I don't think any decent kernel could hit this, but that could be
> > > > > > another sign that this patch duplicated some file allocations.
> > > > > >
> > > > > > > +        if ((new_block->flags & RAM_SHARED) &&
> > > > > > > +            !qemu_ram_alloc_shared(new_block, &local_err)) {
> > > > > > > +            goto err;
> > > > > > > +        }
> > > > > > > +    }
> > > > > > > +
> > > > > > >      ram_block_add(new_block, &local_err);
> > > > > > > -    if (local_err) {
> > > > > > > -        g_free(new_block);
> > > > > > > -        error_propagate(errp, local_err);
> > > > > > > -        return NULL;
> > > > > > > +    if (!local_err) {
> > > > > > > +        return new_block;
> > > > > > >      }
> > > > > > > -    return new_block;
> > > > > > > +
> > > > > > > +err:
> > > > > > > +    g_free(new_block);
> > > > > > > +    error_propagate(errp, local_err);
> > > > > > > +    return NULL;
> > > > > > >  }
> > > > > >
> > > > > > IIUC we only need to conditionally convert an anon-allocation into
> > > > > > an fd-allocation, and then we don't need to mostly duplicate
> > > > > > qemu_ram_alloc_from_fd(); instead we reuse it.
> > > > > >
> > > > > > I do have a few other comments elsewhere that came up while I was
> > > > > > trying to comment.  E.g., we either shouldn't need to bother caching
> > > > > > qemu_memfd_check() results, or we should do it in qemu_memfd_check()
> > > > > > directly.. and some more.
> > > > >
> > > > > Someone thought it a good idea to cache the result of
> > > > > qemu_memfd_alloc_check(), and qemu_memfd_check() will be called more
> > > > > often.  I'll cache the result inside qemu_memfd_check() for the
> > > > > special case of flags=0.
> > > >
> > > > OK.
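
Something like the below sketch is what I meant (not the actual
util/memfd.c code; only the flags==0 probe result gets cached):

    bool qemu_memfd_check(unsigned int flags)
    {
        static int flags0_cache = -1;   /* -1 means not probed yet */
        int mfd;

        if (flags == 0 && flags0_cache != -1) {
            return flags0_cache;
        }
        mfd = memfd_create("test", flags | MFD_CLOEXEC);
        if (mfd >= 0) {
            close(mfd);
        }
        if (flags == 0) {
            flags0_cache = (mfd >= 0);
        }
        return mfd >= 0;
    }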
> > > >
> > > > >
> > > > > > Then I think it's easier if I provide a patch, and also show that
> > > > > > the same thing can be done with smaller changes, with everything
> > > > > > fixed up (e.g. addressing the missing mmu notifier issue above).
> > > > > > What do you think of the below?
> > > > >
> > > > > The key change you make is calling qemu_ram_alloc_from_fd instead of
> > > > > file_ram_alloc, which buys the xen and kvm checks for free.  Sounds
> > > > > good, I will do that in the context of my patch.
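
Right -- roughly something like this in qemu_ram_alloc_internal() (a
sketch only; the exact qemu_ram_alloc_from_fd() argument list aside):

    if (!host && (ram_flags & RAM_SHARED)) {
        int fd = qemu_memfd_create(name, max_size, false, 0, 0, NULL);

        if (fd >= 0) {
            /* Reusing this buys the xen and kvm_has_sync_mmu checks. */
            return qemu_ram_alloc_from_fd(size, mr, ram_flags, fd, 0, errp);
        }
        /* Otherwise fall through to the anonymous allocation below. */
    }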
> > > > >
> > > > > Here are some other changes in your patch, and my responses:
> > > > >
> > > > > I will drop the "Retrying using MAP_ANON|MAP_SHARED" message, as
> > > > > you did.
> > > > >
> > > > > However, I am keeping QEMU_VMALLOC_ALIGN, qemu_set_cloexec, and
> > > > > trace_qemu_ram_alloc_shared.
> > > >
> > > > I guess no huge deal on these, however since we're talking.. Is that
> > > > QEMU_VMALLOC_ALIGN from qemu_anon_ram_alloc()?
> > > >
> > > > A quick dig tells me that it used to be for anon THPs..
> > > >
> > > > commit 36b586284e678da28df3af9fd0907d2b16f9311c
> > > > Author: Avi Kivity <[email protected]>
> > > > Date: Mon Sep 5 11:07:05 2011 +0300
> > > >
> > > > qemu_vmalloc: align properly for transparent hugepages and KVM
> > > >
> > > > And I'm guessing at that time it was also mainly for guest ram.
> > > >
> > > > Considering that this path won't take effect until the new aux mem
> > > > option is on, I'd think it better to stick without anything special
> > > > like QEMU_VMALLOC_ALIGN, until it's justified to be worthwhile.  E.g.,
> > > > Avi explicitly mentioned this in that commit message:
> > > >
> > > >     Adjust qemu_vmalloc() to honor that requirement.  Ignore it for
> > > >     small regions to avoid fragmentation.
> > > >
> > > > And these are mostly small regions when it's AUX.. probably except
> > > > VGA, but that'll be SHARED on top of shmem, not PRIVATE on anon,
> > > > anyway... so it'll be a totally different thing.
> > > >
> > > > So I won't worry about that 2M alignment, and I will try not to carry
> > > > it over, because trying to remove it later will be harder.. even when
> > > > we want to.
> > >
> > > Yes, currently the aux allocations get QEMU_VMALLOC_ALIGN alignment in
> > > qemu_anon_ram_alloc.  I do the same for the shared fd mappings to
> > > guarantee no performance regression,
> >
> > I don't know how we could guarantee that at all - anon and shmem use
> > different knobs to enable/disable THPs after all.. For example:
> >
> > $ ls /sys/kernel/mm/transparent_hugepage/*enabled
> > /sys/kernel/mm/transparent_hugepage/enabled
> > /sys/kernel/mm/transparent_hugepage/shmem_enabled
>
> Yes, but at least shmem_enabled is something the end user can fix. If
> we bake a poor alignment into qemu, the user has no recourse. By setting
> it to QEMU_VMALLOC_ALIGN, I eliminate alignment as a potential performance
> issue. There is no practical downside. We should just do it, especially if
> you believe "no huge deal on these" as written above :)
I'd wager nobody will be able to notice the anon/shmem difference at all,
so if it really regressed nobody will be able to fix it. :)
Not to mention it's a global knob, and IMHO it doesn't make a lot of sense
to change it just for an aux mem region that isn't aligned.. while changing
a global knob could OTOH break other things.
But sure, if you do prefer having that, I'm ok.  Please still consider
adding a comment explaining where it came from..
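
E.g. something like this above the mmap (a sketch; qemu_ram_mmap() is the
real helper, the surrounding code is illustrative):

    /*
     * Use QEMU_VMALLOC_ALIGN (2MB on x86) so the kernel can back the
     * mapping with transparent huge pages; carried over from
     * qemu_anon_ram_alloc(), see commit 36b586284e67.
     */
    ptr = qemu_ram_mmap(fd, size, QEMU_VMALLOC_ALIGN, qemu_map_flags, 0);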
>
> > And their default values normally differ too... it means after switching
> > to fd-based we do face the possibility that THP can be gone at least on
> > the 1st 2MB.
> >
> > When I was suggesting it, I was hoping THP doesn't really matter that
> > much on aux mem, even for VGA.
> >
> > Btw, I don't even think the alignment will affect THP allocations for
> > the whole vma, anyway?  I mean, it's only about the initial 2MB portion..
> > IOW, when not aligned, I think the worst case is we have <2MB at the
> > start address that is not using THP, but later on when it starts to
> > align with 2MB, THPs will be allocated again.
>
> It depends on the kernel version/implementation.  In 6.13, it is not that
> clever for memfd_create + mmap.  An unaligned start means no huge pages
> anywhere in the allocation, as shown by the page-types utility.  Add
> QEMU_VMALLOC_ALIGN, and I get huge pages.
>
> > The challenge is more on the "fd-based" side, where shmem on most distros
> > will disable THP completely.
> >
> > > as some of them are larger than 2M and would benefit from using huge
> > > pages.  The VA fragmentation is trivial for this small number of aux
> > > blocks in a 64-bit address space, and is no different than it was for
> > > qemu_anon_ram_alloc.
> > >
> > > > For the 2nd.. Any quick answer on why the explicit qemu_set_cloexec()
> > > > is needed?
> > >
> > > qemu sets cloexec for all descriptors it opens to prevent them from
> > > accidentally being leaked to another process via fork+exec.
> >
> > But my question is why this is special? For example, we don't do that for
> > "-object memory-backend-memfd", am I right?
>
> We should; the backends also need to set cloexec when they use a cpr fd.
> I'll delete the call here and push it into cpr_find_fd.
Maybe we already have that, since CPR receives fds from iochannels?  I am
looking at qio_channel_socket_copy_fds(), where we have:
#ifndef MSG_CMSG_CLOEXEC
    qemu_set_cloexec(fd);
#endif
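
If it turns out we still need it for cpr fds, pushing it into cpr_find_fd()
could be as simple as the sketch below (cpr_state_find_fd() is a made-up
name for whatever the internal lookup ends up being):

    int cpr_find_fd(const char *name, int id)
    {
        int fd = cpr_state_find_fd(name, id);   /* hypothetical lookup */

        if (fd >= 0) {
            qemu_set_cloexec(fd);   /* don't leak across fork+exec */
        }
        return fd;
    }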
>
> > > > For the 3rd, a tracepoint would definitely be fine whenever you feel
> > > > it's necessary.
> > > >
> > > > > Also, when qemu_memfd_create + qemu_ram_alloc_from_fd fails, qemu
> > > > > should fail and exit, and not fall back, because something
> > > > > unexpected went wrong.  David said the same.
> > > >
> > > > Why?  I was trying to rely on such a fallback to make it work on e.g.
> > > > Xen.  In that case, Xen fails there and falls back to xen_ram_alloc()
> > > > inside the later call to ram_block_add(), no?
> > >
> > > Why -- because something went wrong that should have worked, and we
> > > should report the first fault so its cause can be fixed and cpr can be
> > > used.
> >
> > Ahh so it's only about the corner cases where CPR could raise an error?
> > Can we rely on the later failure of the "migrate" command to tell which
> > ramblock doesn't support it, so the user could be aware as well?
>
> The ramblock migration blocker will indeed tell us which block is a problem.
>
> But, we are throwing away potentially useful information by dropping the
> first error message on the floor. We should only fall back for expected
> failures. Unexpected failures mean there is something to fix.
>
> I can compromise and fail on errors from these:
>     qemu_memfd_create(name, 0, 0, 0, 0, errp);
>     qemu_shm_alloc(0, errp);
How are we going to be sure all existing systems using RAM_SHARED ramblocks
will always succeed on either memfd or sysv shm?  IOW, what if there's a
system that can only support mmap(MAP_SHARED) but neither of the two?

That's my major concern: starting to fail on some systems where it used to
work, even if they're corner cases.
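
IOW, I'd rather keep the full chain, roughly (a sketch only; align and
qemu_map_flags stand for whatever the caller passes):

    int fd = qemu_memfd_create(name, size, false, 0, 0, NULL);
    void *host;

    if (fd < 0) {
        fd = qemu_shm_alloc(size, NULL);        /* then try sysv shm */
    }
    if (fd >= 0) {
        host = qemu_ram_mmap(fd, size, align, qemu_map_flags, 0);
    } else {
        /*
         * Last resort: today's behavior, so such systems keep working
         * (CPR just won't be usable there).
         */
        host = mmap(NULL, size, PROT_READ | PROT_WRITE,
                    MAP_ANON | MAP_SHARED, -1, 0);
    }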
>
> but ignore errors from the subsequent call to qemu_ram_alloc_from_fd,
> and fall back. That keeps the code simple.
>
> > > However, to do the above, but still quietly fall back if
> > > qemu_ram_alloc_from_fd fails because of xen or kvm, I would need to
> > > return different error codes from qemu_ram_alloc_from_fd.  Doable, but
> > > requires tweaks to all occurrences of qemu_ram_alloc_from_fd.
> > >
> > > And BTW, qemu_ram_alloc_from_fd is defined for CONFIG_POSIX only. I need
> > > to modify the call site in the patch accordingly.
> >
> > Yep, I was thinking maybe qemu_ram_alloc_from_fd() had a stub function,
> > but indeed it looks like it doesn't.. the "allocating the fd" part
> > definitely has one, which I remember I checked..
> >
> > > Overall, I am not convinced that using qemu_ram_alloc_from_fd in this
> > > patch is better/simpler than my V4 patch using file_ram_alloc, plus
> > > adding xen and kvm_has_sync_mmu checks in qemu_ram_alloc_internal.
> >
> > As long as you don't need to duplicate these two checks (or duplicate any
> > such check..) I'm ok.
> >
> > Reusing qemu_ram_alloc_from_fd() still sounds like the easiest way to go.
> > Yes, we'll need to teach it about resize(), used_length, etc., but they
> > all look sane to me.  We didn't have those simply because we had no use
> > for them; now that we want resizable fd-based mem, that's the right
> > thing to do to support it on fd allocations.
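
E.g. the prototype could grow into something like (a sketch; parameter
names illustrative):

    RAMBlock *qemu_ram_alloc_from_fd(ram_addr_t size, ram_addr_t max_size,
                                     void (*resized)(const char *,
                                                     uint64_t, void *),
                                     MemoryRegion *mr, uint32_t ram_flags,
                                     int fd, off_t offset, Error **errp);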
> >
> > OTOH, duplicating the xen/mmu checks isn't sane to me.. :( It will make
> > the code harder to maintain, because the 3rd caller of
> > qemu_ram_alloc_from_fd() in the future will need to duplicate them once
> > more (or worse, forget them again until xen / old kernels report a
> > failure)..
>
> I'll make the necessary changes to use qemu_ram_alloc_from_fd.
Thanks.
--
Peter Xu