On 04/14/2013 11:16 PM, Sasha Levin wrote:
> On 04/14/2013 06:01 AM, Michael S. Tsirkin wrote:
>> On Sat, Apr 13, 2013 at 05:23:41PM -0400, Sasha Levin wrote:
>>> On 04/12/2013 07:36 AM, Rusty Russell wrote:
>>>> Sasha Levin <sasha.le...@oracle.com> writes:
>>>>> On 04/11/2013 12:36 PM, Will Deacon wrote:
>>>>>> Hello folks,
>>>>>>
>>>>>> Here's the latest round of ARM fixes and updates for kvmtool. Most of
>>>>>> this is confined to the arm/ subdirectory, with the exception of a fix
>>>>>> to the virtio-mmio vq definitions due to the multi-queue work from
>>>>>> Sasha. I'm not terribly happy about that code though, since it seriously
>>>>>> increases the memory footprint of the guest.
>>>>>>
>>>>>> Without multi-queue, we can boot Debian Wheezy to a prompt in 38MB. With
>>>>>> the new changes, that increases to 170MB! Any chance we can try and
>>>>>> tackle this regression please? I keep getting bitten by the OOM killer :(
>>>>> (cc Rusty, MST)
>>>>>
>>>>> The spec defines the operation of a virtio-net device with regards to
>>>>> multiple queues as follows:
>>>>>
>>>>> """
>>>>> Device Initialization
>>>>>
>>>>>   1. The initialization routine should identify the receive and
>>>>> transmission virtqueues, up to N+1 of each kind. If VIRTIO_NET_F_MQ
>>>>> feature bit is negotiated, N=max_virtqueue_pairs-1, otherwise identify
>>>>> N=0.
>>>>>
>>>>>   [...]
>>>>>
>>>>>   5. Only receiveq0, transmitq0 and controlq are used by default. To use
>>>>> more queues driver must negotiate the VIRTIO_NET_F_MQ feature; initialize
>>>>> up to max_virtqueue_pairs of each of transmit and receive queues; execute
>>>>> the VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command specifying the number of the
>>>>> transmit and receive queues that is going to be used and wait until the
>>>>> device consumes the controlq buffer and acks this command.
>>>>> """
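
For reference, the controlq command that step 5 describes boils down to a
one-field structure plus a handful of constants. Here's a minimal sketch
using the definitions from the Linux uapi virtio_net.h; send_ctrl_cmd() and
struct vnet_dev are placeholders for the driver's own controlq plumbing, not
existing kernel or kvmtool APIs:

#include <stddef.h>
#include <stdint.h>

#define VIRTIO_NET_F_MQ                 22      /* device supports multiqueue */

#define VIRTIO_NET_CTRL_MQ              4       /* controlq class */
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET 0       /* set the active pair count */
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN 1
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX 0x8000

struct virtio_net_ctrl_mq {
	uint16_t virtqueue_pairs;               /* little-endian on the wire */
};

struct vnet_dev;                                /* hypothetical driver state */

/* hypothetical helper: post a command on the controlq and wait for the ack */
int send_ctrl_cmd(struct vnet_dev *dev, uint8_t cls, uint8_t cmd,
		  const void *data, size_t len);

/* Ask the device to start using 'pairs' RX/TX queue pairs (step 5 above). */
static int vnet_set_queue_pairs(struct vnet_dev *dev, uint16_t pairs)
{
	struct virtio_net_ctrl_mq mq = { .virtqueue_pairs = pairs };

	if (pairs < VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN ||
	    pairs > VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX)
		return -1;

	/* blocks until the device consumes the controlq buffer and acks it */
	return send_ctrl_cmd(dev, VIRTIO_NET_CTRL_MQ,
			     VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET, &mq, sizeof(mq));
}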
>>>>>
>>>>> And kvmtool follows that to the letter: it will initialize the maximum
>>>>> number of queues it can support during initialization and will start
>>>>> using them only when the device tells it that it should use them.
>>>>>
>>>>> As Will has stated, this causes a memory issue since all the data
>>>>> structures that hold all possible queues get initialized regardless of
>>>>> whether we actually need them or not, which is quite troublesome for
>>>>> systems with small RAM.
>>>>>
>>>>>
>>>>> Rusty, MST, would you be open to a spec and code change that would
>>>>> initialize the RX/TX vqs on demand instead of at device initialization?
>>>>> Or is there an easier way to work around this issue?
>>>> I'm confused.  kvmtool is using too much memory, or the guest?  If
>>>> kvmtool, the Device Initialization section above applies to the driver,
>>>> not the device.  If the guest, well, the language says "UP TO N+1".  You
>>>> want a small guest, don't use them all.  Or any...
>>>>
>>>> What am I missing?
>>> It's in the guest - sorry. I was only trying to say that kvmtool doesn't
>>> do anything odd with regards to initializing virtio-net.
>>>
>>> The thing is that there should be a difference between just allowing a
>>> larger number of queues and actually using them (i.e. enabling them with
>>> ethtool). Right now I see the kernel lose 130MB just by having kvmtool
>>> offer 8 queue pairs, without actually using those queues.
>>>
>>> Yes, we can make it configurable in kvmtool (and I will make it so, so
>>> that the arm folks can continue working with tiny guests), but does it
>>> make sense that you have to do this configuration in *2* places? First in
>>> the hypervisor and then inside the guest?
>>>
>>> Memory usage should ideally depend on whether you are actually using
>>> multiple queues, not on whether you just allow using those queues.
>>>
>>>
>>> Thanks,
>>> Sasha
>> 8 queues eat up 130MB?  Most of the memory is likely for the buffers?  I
>> think we could easily allocate these lazily as queues are enabled,
>> without protocol changes. It's harder to clean them up as there's no way
>> to reset a specific queue, but maybe that's good enough for your purposes?
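
A rough sketch of that lazy scheme from the driver's side: still identify all
max_virtqueue_pairs virtqueues at probe time, but only post receive buffers
for pair 0, and fill the remaining RX rings once VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET
has been acked. The names below (vnet_dev, fill_rx_ring, and
vnet_set_queue_pairs from the earlier sketch) are hypothetical, not existing
virtio_net internals:

#include <stdint.h>

struct vnet_dev;                                /* hypothetical driver state */

/* hypothetical helpers */
int fill_rx_ring(struct vnet_dev *dev, uint16_t qidx);
int vnet_set_queue_pairs(struct vnet_dev *dev, uint16_t pairs);

static int vnet_probe(struct vnet_dev *dev)
{
	/* Identify up to max_virtqueue_pairs RX/TX virtqueues as usual,
	 * but hand receive buffers only to receiveq0 for now. */
	return fill_rx_ring(dev, 0);
}

static int vnet_set_channels(struct vnet_dev *dev, uint16_t pairs)
{
	uint16_t i;
	int err;

	/* controlq command: VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET */
	err = vnet_set_queue_pairs(dev, pairs);
	if (err)
		return err;

	/* only now pay the memory cost for the extra queues */
	for (i = 1; i < pairs; i++) {
		err = fill_rx_ring(dev, i);
		if (err)
			return err;
	}
	return 0;
}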
>>
> Yup, this is how it looks in the guest right after booting:
>
> Without virtio-net mq:
>
> # free
>              total       used       free     shared    buffers     cached
> Mem:        918112     158052     760060          0          0       4308
> -/+ buffers/cache:     153744     764368
>
> With queue pairs = 8:
>
> # free
>              total       used       free     shared    buffers     cached
> Mem:        918112     289168     628944          0          0       4244
> -/+ buffers/cache:     284924     633188
>
>
> Initializing them only when they're actually needed will do the trick here.

I don't see so much memory allocation with qemu; the main problem here, I
guess, is that kvmtool does not support mergeable rx buffers (does it?). So
the guest must allocate 64K per packet, which gets even worse in the
multiqueue case. If mergeable rx buffers were used, the receive buffers
would only occupy about 256 * 4K (1M) per queue, which seems pretty
acceptable here.
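
Back-of-the-envelope, assuming 256-entry rings and (e.g. with indirect
descriptors) one posted receive buffer per ring slot, that matches the
numbers above fairly well:

  without mergeable rx buffers:  8 rx queues * 256 * 64K ~= 128M
  with mergeable rx buffers:     8 rx queues * 256 *  4K ~=   8M

which is roughly the ~130MB delta Sasha measured.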
>
> We could also expose the facility to delete a single vq, and add a note to
> the spec saying that if the number of active vq pairs is reduced below what
> it was before, the now inactive queues become invalid and would need to be
> re-initialized. It's not pretty, but it would let both device and driver
> free up those vqs.
>
>
> Thanks,
> Sasha
