On 2019/7/19 10:36, Jason Wang wrote:
>
>> On 2019/7/18 10:43 PM, Michael S. Tsirkin wrote:
>> On Thu, Jul 18, 2019 at 10:42:47AM -0400, Michael S. Tsirkin wrote:
>>> On Thu, Jul 18, 2019 at 10:01:05PM +0800, Jason Wang wrote:
>>>>> On 2019/7/18 9:04 PM, Michael S. Tsirkin wrote:
>>>>>> On Thu, Jul 18, 2019 at 12:55:50PM +0000, jiang wrote:
>>>>>> This change makes the ring buffer reclaim threshold num_free
>>>>>> configurable for better performance; it is currently hard coded as
>>>>>> 1/2 * queue size. According to our tests with qemu + dpdk, packet
>>>>>> dropping happens when the guest is not able to provide free buffers
>>>>>> in the avail ring in time. A smaller num_free does decrease the
>>>>>> number of packet drops during our tests, as it makes virtio_net
>>>>>> reclaim buffers earlier.
>>>>>>
>>>>>> At the very least, we should make the value configurable by the user
>>>>>> while keeping 1/2 * queue size as the default.
>>>>>>
>>>>>> Signed-off-by: jiangkidd<jiangk...@hotmail.com>
>>>>> That would be one reason, but I suspect it's not the true one. If you
>>>>> need more buffers due to jitter, then just increase the queue size.
>>>>> That would be cleaner.
>>>>>
>>>>>
>>>>> However, are you sure this is the reason for packet drops? Do you see
>>>>> them dropped by dpdk due to lack of space in the ring, as opposed to
>>>>> by the guest?
>>>>>
>>>>>
>>>> Besides those, this patch depends on the user choosing a suitable
>>>> threshold, which is not good. You need either a good value with
>>>> demonstrated numbers or something smarter.
>>>>
>>>> Thanks
>>> I do however think that we have a problem right now: try_fill_recv can
>>> take a long time, during which the net stack does not run at all. Imagine
>>> a 1K queue - we are talking 512 packets. That's excessive.
>
>
> Yes, we will starve a fast host in this case.
>
>
>>>    The napi poll
>>> weight solves a similar problem, so it might make sense to cap this at
>>> napi_poll_weight.
>>>
>>> That would allow tweaking it through a module parameter as a side
>>> effect :) Maybe just use NAPI_POLL_WEIGHT.
>> Or maybe NAPI_POLL_WEIGHT/2, like we do half the queue ;). Please
>> experiment, measure performance, and let the list know.
>>
>>> We need to be careful though: queues can also be small, and I don't
>>> think we want to exceed queue size / 2, or maybe
>>> queue size - napi_poll_weight. It definitely must not exceed the full
>>> queue size.
>
>
> Looking at Intel, it uses 16 and i40e uses 32. It looks to me that
> NAPI_POLL_WEIGHT/2 is better.
>
> Jiang, want to try that and post a new patch?
>
> Thanks
>
>
>>>
>>> -- 
>>> MST

We have completed several rounds of tests with the value set to budget (64 
by default). It does improve things a lot, with packet drop below 400 pps 
for a single stream. Let me consolidate the data and I will send it soon. 
We are confident that the guest runs out of free buffers in the avail ring 
when packet dropping happens, based on the SystemTap script below.

Just a snippet:

probe module("virtio_ring").function("virtqueue_get_buf")
{
     x = (@cast($_vq, "vring_virtqueue")->vring->used->idx)- 
(@cast($_vq, "vring_virtqueue")->last_used_idx) ---> we use this one to 
verify if the queue is full, which means guest is not able to take 
buffer from the queue timely

     if (x<0 && (x+65535)<4096)
         x = x+65535

     if((x==1024) && @cast($_vq, "vring_virtqueue")->vq->callback == 
callback_addr)
         netrxcount[x] <<< gettimeofday_s()
}


probe module("virtio_ring").function("virtqueue_add_inbuf")
{
     y = (@cast($vq, "vring_virtqueue")->vring->avail->idx)- (@cast($vq, 
"vring_virtqueue")->vring->used->idx) ---> we use this one to verify if 
we run out of free buffer in avail ring
     if (y<0 && (y+65535)<4096)
         y = y+65535

     if(@2=="debugon")
     {
         if(y==0 && @cast($vq, "vring_virtqueue")->vq->callback == 
callback_addr)
         {
             netrxfreecount[y] <<< gettimeofday_s()

             printf("no avail ring left seen, printing most recent 5 num 
free, vq: %lx, current index: %d\n", $vq, recentfreecount)
             for(i=recentfreecount; i!=((recentfreecount+4) % 5); 
i=((i+1) % 5))
             {
                 printf("index: %d, num free: %d\n", i, recentfree[$vq, i])
             }

             printf("index: %d, num free: %d\n", i, recentfree[$vq, i])
             //exit()
         }
     }
}


probe module("virtio_net").statement("virtnet_receive@drivers/net/virtio_net.c:732")
{
    recentfreecount++
    recentfreecount = recentfreecount % 5
    # record num_free for the last 5 calls to virtnet_receive, so we can see
    # whether lowering the refill threshold helps
    recentfree[$rq->vq, recentfreecount] = $rq->vq->num_free
}
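
For reference, here is a simplified view of the fields the @cast expressions
above reach into (pre-packed-ring split-ring layout, which is what the casts
assume; newer kernels lay this out differently, and everything not read by
the script is elided):

struct vring_avail {          /* written by the guest driver */
    __u16 flags;
    __u16 idx;                /* next slot the driver will make available */
    __u16 ring[];
};

struct vring_used {           /* written by the device */
    __u16 flags;
    __u16 idx;                /* next slot the device will mark as used */
    struct vring_used_elem ring[];
};

struct vring {
    unsigned int num;
    struct vring_desc *desc;
    struct vring_avail *avail;
    struct vring_used *used;
};

struct virtqueue {
    /* ... */
    void (*callback)(struct virtqueue *vq);  /* matched against callback_addr to filter the RX vq */
    unsigned int num_free;                   /* free descriptors, sampled in the virtnet_receive probe */
    /* ... */
};

struct vring_virtqueue {
    struct virtqueue vq;
    struct vring vring;
    /* ... */
    u16 last_used_idx;        /* last used entry the driver has already consumed */
    /* ... */
};

So used->idx - last_used_idx is the backlog of used buffers the guest has not
reclaimed yet, and avail->idx - used->idx is how many posted buffers the
device still has left to fill.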


Here is the result:

no avail ring left seen, printing most recent 5 num free, vq: ffff9c13c1200000, current index: 1
index: 1, num free: 561
index: 2, num free: 305
index: 3, num free: 369
index: 4, num free: 433
index: 0, num free: 497
no avail ring left seen, printing most recent 5 num free, vq: ffff9c13c1200000, current index: 1
index: 1, num free: 543
index: 2, num free: 463
index: 3, num free: 469
index: 4, num free: 476
index: 0, num free: 479
no avail ring left seen, printing most recent 5 num free, vq: ffff9c13c1200000, current index: 2
index: 2, num free: 555
index: 3, num free: 414
index: 4, num free: 420
index: 0, num free: 427
index: 1, num free: 491

You can see that in the last 4 calls to virtnet_receive before we run out of 
free buffers and start to reclaim, num_free is quite high. So if we can do 
the reclaim earlier, it will certainly help.
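
To make "reclaim earlier" concrete, here is a minimal sketch (not my actual
patch; it assumes the refill check in virtnet_receive() as it stands in
drivers/net/virtio_net.c, plus a hypothetical local variable thresh) of
capping the threshold at the NAPI budget, as suggested earlier in the thread:

/*
 * Today virtnet_receive() only tries to refill once more than half of the
 * ring is free:
 *
 *     if (rq->vq->num_free > virtqueue_get_vring_size(rq->vq) / 2) {
 *         if (!try_fill_recv(vi, rq, GFP_ATOMIC))
 *             schedule_delayed_work(&vi->refill, 0);
 *     }
 *
 * Capping the threshold at the NAPI budget (or NAPI_POLL_WEIGHT/2) makes a
 * large ring refill earlier while never exceeding half of a small ring.
 */
unsigned int thresh = min_t(unsigned int, budget,
                            virtqueue_get_vring_size(rq->vq) / 2);

if (rq->vq->num_free > thresh) {
    if (!try_fill_recv(vi, rq, GFP_ATOMIC))
        schedule_delayed_work(&vi->refill, 0);
}

With a 1024-entry ring and the default NAPI budget of 64, that moves the
refill trigger from 512 free descriptors down to 64.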

Meanwhile, the patch I proposed keeps the default value at 1/2 * queue size, 
so the default behavior remains unchanged and the knob is only exposed to 
advanced users who really understand what they are doing. Also, the best 
value may vary across environments. Do you still think hardcoding this is 
the better option?
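
For completeness, this is the shape of knob I have in mind; a hypothetical
sketch only (rx_refill_thresh and rx_refill_threshold() are made-up names
for illustration, not the code in my patch), with 1/2 * queue size kept both
as the default and as an upper bound:

/* 0 means "use half the ring", so the default behavior is unchanged. */
static unsigned int rx_refill_thresh;
module_param(rx_refill_thresh, uint, 0644);
MODULE_PARM_DESC(rx_refill_thresh,
                 "Free RX descriptors that trigger a refill (0 = half the ring)");

static unsigned int rx_refill_threshold(struct receive_queue *rq)
{
    unsigned int half = virtqueue_get_vring_size(rq->vq) / 2;

    /* Never let the threshold exceed half the ring. */
    return rx_refill_thresh ? min(rx_refill_thresh, half) : half;
}

/* virtnet_receive() would then check:
 *     if (rq->vq->num_free > rx_refill_threshold(rq)) ...
 */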


Jiang


