On Thu, Jul 30, 2015 at 09:31:34AM +0200, Igor Mammedov wrote:
> On Thu, 30 Jul 2015 09:29:56 +0300
> "Michael S. Tsirkin" <m...@redhat.com> wrote:
> 
> > On Thu, Jul 30, 2015 at 09:25:55AM +0300, Michael S. Tsirkin wrote:
> > > On Thu, Jul 30, 2015 at 08:22:18AM +0200, Igor Mammedov wrote:
> > > > On Wed, 29 Jul 2015 18:03:36 +0300
> > > > "Michael S. Tsirkin" <m...@redhat.com> wrote:
> > > > 
> > > > > On Wed, Jul 29, 2015 at 01:49:47PM +0200, Igor Mammedov wrote:
> > > > > > v1->v2:
> > > > > >   * replace probing with checking for
> > > > > >     /sys/module/vhost/parameters/max_mem_regions and,
> > > > > >     if it's missing or has a wrong value, return the
> > > > > >     hardcoded legacy limit (64 slots).
> > > > > > 
> > > > > > It's a defensive patchset which helps to avoid QEMU crashing
> > > > > > at memory hotplug time by checking that vhost has free capacity
> > > > > > for an additional memory slot.
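
So the limit lookup boils down to something like this sketch (the
function name and error handling here are illustrative, not the
patch's actual code):

    #include <stdio.h>

    #define VHOST_LEGACY_MAX_MEM_REGIONS 64

    static unsigned vhost_max_mem_regions(void)
    {
        unsigned limit;
        FILE *f = fopen("/sys/module/vhost/parameters/max_mem_regions",
                        "r");

        if (!f) {                       /* old kernel, no parameter */
            return VHOST_LEGACY_MAX_MEM_REGIONS;
        }
        if (fscanf(f, "%u", &limit) != 1 || limit == 0) {
            limit = VHOST_LEGACY_MAX_MEM_REGIONS;   /* bogus value */
        }
        fclose(f);
        return limit;
    }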
> > > > > 
> > > > > What if vhost is added after memory hotplug? Don't you need
> > > > > to check that as well?
> > > > A vhost device can be hotplugged after memory hotplug as long
> > > > as the current slot count doesn't exceed its limit; if the limit
> > > > is exceeded, device_add would fail, or the virtio device would
> > > > fall back to non-vhost mode at its start-up (depending on how
> > > > the particular device treats vhost_start failure).
> > > 
> > > Where exactly does it fail?
> > > memory_listener_register returns void so clearly it's not that ...
> > 
> > Oh, dev_start fails. But that's not called at device_add time.
> > And vhost-user can't fall back to anything.
> Yes, it looks like it would lead to a non-functional vhost-user-backed
> device, since there isn't any error handling at that stage.
> 
> But it would be the same without memory hotplug too; one just has to
> start QEMU with several -numa node,memdev=xxx options to cause that
> condition.

Absolutely. And kvm has this problem too with kernels from before 2014.
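
E.g. something like this (illustrative command line; the exact backend
options don't matter) can run into the backend's region limit with no
hotplug involved at all:

    qemu-system-x86_64 ... \
        -object memory-backend-ram,id=m0,size=1G \
        -object memory-backend-ram,id=m1,size=1G \
        -numa node,memdev=m0 \
        -numa node,memdev=m1 \
        ...

Each memdev-backed node occupies a memory region, so enough of them
exhaust the limit before any dimm is plugged.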

But I have a question: do we have to figure out the number of
chunks exactly? How about being blunt, and just limiting the
number of memory devices?

How about this (rough sketch below):
        - teach memory listeners about a new "max mem devices" field
        - when registering a listener, check that the # of mem devices
          does not exceed this limit; if it does, fail to register the
          listener
        - when adding a mem device, check that no existing listener
          has a limit that conflicts with it
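
Roughly, a minimal self-contained sketch (mem_listener_register,
mem_device_add etc. are made-up names, not the real MemoryListener
API):

    #include <errno.h>
    #include <stddef.h>

    typedef struct MemListener {
        unsigned max_mem_devices;       /* 0 means "no limit" */
        struct MemListener *next;
    } MemListener;

    static MemListener *listeners;
    static unsigned mem_devices;        /* mem devices added so far */

    /* Registering fails if the guest already has more mem devices
     * than this listener can handle. */
    int mem_listener_register(MemListener *l)
    {
        if (l->max_mem_devices && mem_devices > l->max_mem_devices) {
            return -ENOSPC;
        }
        l->next = listeners;
        listeners = l;
        return 0;
    }

    /* Adding a mem device fails if it would exceed the limit of any
     * already registered listener. */
    int mem_device_add(void)
    {
        MemListener *l;

        for (l = listeners; l; l = l->next) {
            if (l->max_mem_devices &&
                mem_devices + 1 > l->max_mem_devices) {
                return -ENOSPC;
            }
        }
        mem_devices++;
        return 0;
    }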

Of course we could add a separate linked list + register API with just
this field instead of adding it to a memory listener, if that seems
more appropriate.


> Probably the best place to add this check is at vhost_net_init(),
> so that backend creation fails when one tries to add it via the
> monitor/CLI.

I'd say vhost_dev_init() - it's not network-specific at all.
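
I.e. something along these lines (a sketch only; vhost_has_free_slot()
is the helper patch 1/2 adds, everything else here is made up):

    #include <errno.h>
    #include <stdbool.h>

    bool vhost_has_free_slot(void);     /* added by patch 1/2 */

    /* Hypothetical early check in vhost_dev_init(): refuse to create
     * the backend when no memory slot is left, so that device_add
     * fails cleanly at the monitor/CLI instead of the device silently
     * breaking at start-up. */
    int vhost_dev_init_precheck(void)
    {
        if (!vhost_has_free_slot()) {
            return -ENOSPC;
        }
        return 0;
    }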

> > > > > > 
> > > > > > Igor Mammedov (2):
> > > > > >   vhost: add vhost_has_free_slot() interface
> > > > > >   pc-dimm: add vhost slots limit check before committing to hotplug
> > > > > > 
> > > > > >  hw/mem/pc-dimm.c                  |  7 +++++++
> > > > > >  hw/virtio/vhost-backend.c         | 21 ++++++++++++++++++++-
> > > > > >  hw/virtio/vhost-user.c            |  8 +++++++-
> > > > > >  hw/virtio/vhost.c                 | 21 +++++++++++++++++++++
> > > > > >  include/hw/virtio/vhost-backend.h |  2 ++
> > > > > >  include/hw/virtio/vhost.h         |  1 +
> > > > > >  stubs/Makefile.objs               |  1 +
> > > > > >  stubs/vhost.c                     |  6 ++++++
> > > > > >  8 files changed, 65 insertions(+), 2 deletions(-)
> > > > > >  create mode 100644 stubs/vhost.c
> > > > > > 
> > > > > > -- 
> > > > > > 1.8.3.1
