----- Original Message -----
> ----- Original Message -----
> > On 07/19/2011 02:05 PM, Sasha Levin wrote:
> > > On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> > > >  On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > > >  >  This patch changes coalesced mmio to create one mmio device
> > > >  >  per zone instead of handling all zones in one device.
> > > >  >
> > > >  >  Doing so enables us to take advantage of existing locking and
> > > >  >  prevents a race condition between coalesced mmio
> > > >  >  registration/unregistration and lookups.
> > > >  >
> > > >  >  @@ -63,7 +63,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
> > > >  >     */
> > > >  >    struct kvm_io_bus {
> > > >  >      int                   dev_count;
> > > >  >  -#define NR_IOBUS_DEVS 200
> > > >  >  +#define NR_IOBUS_DEVS 300
> > > >  >      struct kvm_io_device *devs[NR_IOBUS_DEVS];
> > > >  >    };
> > > >
> > > >  This means that a lot of non-coalesced-mmio users can squeeze
> > > >  out coalesced-mmio.  I don't know if it's really worthwhile, but
> > > >  the 100 coalesced mmio slots should be reserved so we are
> > > >  guaranteed they are available.
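> > > > 
> > > >  Something along these lines, roughly (an untested sketch of the
> > > >  registration path; the "is_coalesced" flag and the reservation
> > > >  check are illustrative, not actual code):
> > > > 
> > > >      /* Reserve KVM_COALESCED_MMIO_ZONE_MAX slots so ordinary
> > > >       * devices can never starve coalesced mmio registration. */
> > > >      if (!is_coalesced &&
> > > >          bus->dev_count >= NR_IOBUS_DEVS - KVM_COALESCED_MMIO_ZONE_MAX)
> > > >              return -ENOSPC;
> > > >      if (bus->dev_count >= NR_IOBUS_DEVS)
> > > >              return -ENOSPC;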
> > >
> > > We are currently registering 4 devices, plus however many
> > > ioeventfds/coalesced mmio zones the user wants. I felt bad about
> > > upping it to 300, really.
> > 
> > It's just a few kilobytes, whereas even a small guest occupies half
> > a gigabyte.  Even just its pagetables swallow up megabytes.
> > 
> > An array means fewer opportunities to screw up the code and better
> > cache usage with small objects.
> 
> 
> Commit e80e2a60 increased NR_IOBUS_DEVS because each
> virtio-net (vhost=on) device requests 2 iobus devices.
> 
> > commit e80e2a60ff7914dae691345a976c80bbbff3ec74
> > Author: Sridhar Samudrala <s...@us.ibm.com>
> > Date:   Tue Mar 30 16:48:25 2010 -0700
> > 
> >     KVM: Increase NR_IOBUS_DEVS limit to 200
> >     
> >     This patch increases the current hardcoded limit of NR_IOBUS_DEVS
> >     from 6 to 200. We are hitting this limit when creating a guest
> >     with more than 1 virtio-net device using vhost-net backend. Each
> >     virtio-net device requires 2 such devices to service
> >     notifications from rx/tx queues.
> >     
> >     Signed-off-by: Sridhar Samudrala <s...@us.ibm.com>
> >     Signed-off-by: Avi Kivity <a...@redhat.com>
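> 
> For illustration, those 2 slots per device come from one ioeventfd per
> virtqueue notification; each KVM_IOEVENTFD registration consumes one
> iobus slot via kvm_assign_ioeventfd(). A userspace-side sketch (the
> notify address, fds and queue indexes here are made up):
> 
>     struct kvm_ioeventfd tx = {
>             .addr      = notify_addr,       /* queue notify register */
>             .len       = 2,
>             .datamatch = TX_QUEUE_IDX,
>             .fd        = tx_eventfd,
>             .flags     = KVM_IOEVENTFD_FLAG_DATAMATCH,
>     };
>     ioctl(vm_fd, KVM_IOEVENTFD, &tx);  /* 1st iobus device */
>     /* ... and the same again for the rx queue: 2nd iobus device */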
> 
> Hi all,
> 
> #define KVM_COALESCED_MMIO_ZONE_MAX 100
> The old maximum number of coalesced mmio zones is 100. After applying
> this patch, each zone gets its own iobus device, so NR_IOBUS_DEVS is
> increased by 100.
> 
> > commit 8c99ce360904ba2f46d4840f1b8df7451331ea38
> > Author: Sasha Levin <levinsasha...@gmail.com>
> > Date:   Wed Jul 20 20:59:00 2011 +0300
> > 
> >     KVM: Make coalesced mmio use a device per zone
> >     
> >     This patch changes coalesced mmio to create one mmio device per
> >     zone instead of handling all zones in one device.
> >     
> >     Doing so enables us to take advantage of existing locking and
> >     prevents a race condition between coalesced mmio
> >     registration/unregistration and lookups.
> >     
> >     Suggested-by: Avi Kivity <a...@redhat.com>
> >     Signed-off-by: Sasha Levin <levinsasha...@gmail.com>
> >     Signed-off-by: Marcelo Tosatti <mtosa...@redhat.com>
> 
> 
> kvm_io_bus devices are used for ioeventfd, pit, pic, ioapic and
> coalesced mmio.
> 
> virt/kvm/coalesced_mmio.c:144:  ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, zone->addr,           --> kvm_vm_ioctl_register_coalesced_mmio()
> virt/kvm/eventfd.c:589:         ret = kvm_io_bus_register_dev(kvm, bus_idx, p->addr, p->length,        --> kvm_assign_ioeventfd()
> virt/kvm/ioapic.c:397:          ret = kvm_io_bus_register_dev(kvm, KVM_MMIO_BUS, ioapic->base_address, --> kvm_ioapic_init()
> arch/x86/kvm/i8254.c:716:       ret = kvm_io_bus_register_dev(kvm, KVM_PIO_BUS, KVM_PIT_BASE_ADDRESS,  --> kvm_create_pit()
> arch/x86/kvm/i8254.c:723:       ret = kvm_io_bus_register_dev(kvm, KVM_PIO_BUS,
> arch/x86/kvm/i8259.c:613:       ret = kvm_io_bus_register_dev(kvm, KVM_PIO_BUS, 0x20, 2,               --> kvm_create_pic()
> arch/x86/kvm/i8259.c:618:       ret = kvm_io_bus_register_dev(kvm, KVM_PIO_BUS, 0xa0, 2, &s->dev_slave);
> arch/x86/kvm/i8259.c:622:       ret = kvm_io_bus_register_dev(kvm, KVM_PIO_BUS, 0x4d0, 2, &s->dev_eclr);
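> 
> A rough tally from these call sites, treating everything as one pool
> for a worst-case feel (strictly speaking, each bus has its own
> NR_IOBUS_DEVS array):
> 
>     fixed:     ioapic (1) + pit (2) + pic (3)    =   6
>     coalesced: KVM_COALESCED_MMIO_ZONE_MAX       = 100
>     left for ioeventfds: 300 - 6 - 100           = 194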
> 
> 
> Does each virtio-blk device need 1 iobus device?
> Does each virtio-net (vhost=on) device need 2 iobus devices?
> 
> Currently QEMU emulates only one PCI bus, which contains 32 slots, and
> each slot contains 8 functions, so the maximum number of supported PCI
> devices is 1 * 32 * 8 = 256. Are 300 iobus devices enough?
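> 
> Back of the envelope, assuming the worst case where every function is
> a vhost virtio-net device needing 2 ioeventfds:
> 
>     256 functions * 2 iobus devices = 512 > 300
> 
> so 300 would not cover that theoretical worst case.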

Hi everyone,

Any thoughts about this?

Thanks,
 Amos
 
> > > >  >
> > > >  >  @@ -95,6 +85,8 @@ static void coalesced_mmio_destructor(struct kvm_io_device *this)
> > > >  >    {
> > > >  >      struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
> > > >  >
> > > >  >  +   list_del(&dev->list);
> > > >  >  +
> > > >  >      kfree(dev);
> > > >  >    }
> > > >  >
> > > >
> > > >  No lock?
> > >
> > > The lock is there to synchronize access to the coalesced ring (it
> > > was here before this patch too, it's not something new), not the
> > > device list.
> > >
> > > The device list is only accessed when kvm->slots_lock is held, so
> > > it takes care of that.
> > 
> > Right.  A comment please.
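> > 
> > E.g., just to sketch the kind of comment meant here:
> > 
> >     /* Protected by kvm->slots_lock, not the ring lock. */
> >     list_del(&dev->list);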
> > 
> > btw, don't we leak all zones on guest destruction? the array didn't
> > need any cleanup, but this list does.
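> > 
> > Something like this on VM teardown, for instance (a sketch; the
> > kvm->coalesced_zones list head name is assumed, and the list field
> > is the one added by the patch):
> > 
> >     struct kvm_coalesced_mmio_dev *dev, *tmp;
> > 
> >     /* Free every per-zone device still registered. */
> >     list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list) {
> >             list_del(&dev->list);
> >             kfree(dev);
> >     }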
> > 
 