On 07/19/2011 02:05 PM, Sasha Levin wrote:
On Tue, 2011-07-19 at 13:57 +0300, Avi Kivity wrote:
> On 07/19/2011 01:31 PM, Sasha Levin wrote:
> > This patch changes coalesced mmio to create one mmio device per
> > zone instead of handling all zones in one device.
> >
> > Doing so enables us to take advantage of existing locking and prevents
> > a race condition between coalesced mmio registration/unregistration
> > and lookups.
> >
> > @@ -63,7 +63,7 @@ extern struct kmem_cache *kvm_vcpu_cache;
> > */
> > struct kvm_io_bus {
> > int dev_count;
> > -#define NR_IOBUS_DEVS 200
> > +#define NR_IOBUS_DEVS 300
> > struct kvm_io_device *devs[NR_IOBUS_DEVS];
> > };
>
> This means that a lot of non-coalesced-mmio users can squeeze out
> coalesced-mmio. I don't know if it's really worthwhile, but the 100
> coalesced mmio slots should be reserved so we are guaranteed they are
> available.
We are currently registering 4 devices, plus however many
ioeventfds/coalesced mmio zones the user wants. I felt bad enough
about upping it to 300 as it is.
It's just a few kilobytes, when even a small guest occupies half a
gigabyte. Even just its pagetables swallow up megabytes.

An array means fewer opportunities to screw up the code and better
cache usage with small objects.
>
> >
> > @@ -95,6 +85,8 @@ static void coalesced_mmio_destructor(struct kvm_io_device *this)
> > {
> > struct kvm_coalesced_mmio_dev *dev = to_mmio(this);
> >
> > + list_del(&dev->list);
> > +
> > kfree(dev);
> > }
> >
>
> No lock?
The lock is there to synchronize access to the coalesced ring (it was
there before this patch too; it's not something new), not the device
list.

The device list is only accessed while kvm->slots_lock is held, so
that takes care of it.
Right. A comment please.
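Something like this sketch is presumably what's wanted; the comment
wording is mine, assuming every caller of the destructor holds
kvm->slots_lock as described above:

static void coalesced_mmio_destructor(struct kvm_io_device *this)
{
	struct kvm_coalesced_mmio_dev *dev = to_mmio(this);

	/*
	 * No dev->lock here: that lock only guards the coalesced
	 * ring. The zone list is protected by kvm->slots_lock,
	 * which all callers of the destructor already hold.
	 */
	list_del(&dev->list);

	kfree(dev);
}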
Btw, don't we leak all zones on guest destruction? The array didn't
need any cleanup, but this list does.
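If so, a teardown helper along these lines could plug it; a sketch
only, assuming the zones are chained on a per-VM list head
(kvm->coalesced_zones is my guess at the name):

static void coalesced_mmio_free_zones(struct kvm *kvm)
{
	struct kvm_coalesced_mmio_dev *dev, *tmp;

	/* slots_lock protects the zone list, as discussed above. */
	mutex_lock(&kvm->slots_lock);
	list_for_each_entry_safe(dev, tmp, &kvm->coalesced_zones, list) {
		list_del(&dev->list);
		kfree(dev);
	}
	mutex_unlock(&kvm->slots_lock);
}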
--
error compiling committee.c: too many arguments to function