On Thu, 21 Mar 2024 at 08:22, Kevin Wolf <kw...@redhat.com> wrote:
>
> Am 20.03.2024 um 15:09 hat Daniel P. Berrangé geschrieben:
> > On Wed, Mar 20, 2024 at 09:35:39AM -0400, Stefan Hajnoczi wrote:
> > > On Tue, Mar 19, 2024 at 08:10:49PM +0000, Daniel P. Berrangé wrote:
> > > > On Tue, Mar 19, 2024 at 01:55:10PM -0400, Stefan Hajnoczi wrote:
> > > > > On Tue, Mar 19, 2024 at 01:43:32PM +0000, Daniel P. Berrangé wrote:
> > > > > > On Mon, Mar 18, 2024 at 02:34:29PM -0400, Stefan Hajnoczi wrote:
> > > > > > > diff --git a/util/qemu-coroutine.c b/util/qemu-coroutine.c
> > > > > > > index 5fd2dbaf8b..2790959eaf 100644
> > > > > > > --- a/util/qemu-coroutine.c
> > > > > > > +++ b/util/qemu-coroutine.c
> > > > > >
> > > > > > > +static unsigned int get_global_pool_hard_max_size(void)
> > > > > > > +{
> > > > > > > +#ifdef __linux__
> > > > > > > +    g_autofree char *contents = NULL;
> > > > > > > +    int max_map_count;
> > > > > > > +
> > > > > > > +    /*
> > > > > > > +     * Linux processes can have up to max_map_count virtual memory areas
> > > > > > > +     * (VMAs). mmap(2), mprotect(2), etc fail with ENOMEM beyond this limit. We
> > > > > > > +     * must limit the coroutine pool to a safe size to avoid running out of
> > > > > > > +     * VMAs.
> > > > > > > +     */
> > > > > > > +    if (g_file_get_contents("/proc/sys/vm/max_map_count", &contents, NULL,
> > > > > > > +                            NULL) &&
> > > > > > > +        qemu_strtoi(contents, NULL, 10, &max_map_count) == 0) {
> > > > > > > +        /*
> > > > > > > +         * This is a conservative upper bound that avoids exceeding
> > > > > > > +         * max_map_count. Leave half for non-coroutine users like library
> > > > > > > +         * dependencies, vhost-user, etc. Each coroutine takes up 2 VMAs so
> > > > > > > +         * halve the amount again.
> > > >
> > > > Leaving half for loaded libraries, etc is quite conservative
> > > > if max_map_count is the small-ish 64k default.
> > > >
> > > > That reservation could perhaps be a fixed number like 5,000?
> > >
> > > While I don't want QEMU to abort, once this heuristic is in the code it
> > > will be scary to make it more optimistic and we may never change it. So
> > > now is the best time to try 5,000.
> > >
> > > I'll send a follow-up patch that reserves 5,000 mappings. If that turns
> > > out to be too optimistic we can increase the reservation.
> >
> > BTW, I suggested 5,000 because I looked at a few QEMU processes I have
> > running on Fedora and saw just under 1,000 lines in /proc/$PID/maps,
> > of which only a subset is library mappings. So multiplying that by 5 felt
> > like a fairly generous overhead for more complex build configurations.
>
> On my system, the boring desktop VM with no special hardware or other
> advanced configuration takes ~1500 mappings, most of which are
> libraries. I'm not concerned about the library mappings, it's unlikely
> that we'll double the number of libraries soon.
>
> But I'm not sure about dynamic mappings outside of coroutines, maybe
> when enabling features my simple desktop VM doesn't even use at all. If
> we're sure that nothing else uses any number worth mentioning, fine with
> me. But I couldn't tell.
>
> Staying in the area we know reasonably well, how many libblkio bounce
> buffers could be in use at the same time? I think each one is an
> individual mmap(), right?

libblkio's mapping requirements are similar to vhost-user's. There is
one general-purpose bounce buffer mapping plus a mapping for each QEMU
RAMBlock. That means the number of in-flight I/Os does not directly
influence the number of mappings.
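
For reference, here is a rough sketch of what the follow-up patch with a
fixed reservation could look like. Only the function name and the two
helper calls come from the patch quoted above; the 5,000 figure is the
number from this discussion, and the small-limit fallback and the
UINT_MAX default are placeholders of mine, not final code:

static unsigned int get_global_pool_hard_max_size(void)
{
#ifdef __linux__
    g_autofree char *contents = NULL;
    int max_map_count;

    if (g_file_get_contents("/proc/sys/vm/max_map_count", &contents, NULL,
                            NULL) &&
        qemu_strtoi(contents, NULL, 10, &max_map_count) == 0) {
        /*
         * Reserve a fixed number of mappings for non-coroutine users
         * (libraries, vhost-user, libblkio, etc.) instead of half of
         * max_map_count. Each coroutine takes up 2 VMAs, so halve the
         * remainder.
         */
        if (max_map_count > 5000) {
            return (max_map_count - 5000) / 2;
        }
        /* Placeholder for unusually small limits: keep the old halve-twice behavior */
        return max_map_count / 4;
    }
#endif

    /* Placeholder default when the limit cannot be determined */
    return UINT_MAX;
}

A fixed reservation scales better than halving when max_map_count is
raised well above the 64k-ish default, which was the motivation for the
suggestion above.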

Stefan
