On 2017/10/06 04:07, Matthew Rosato wrote:
On 09/25/2017 04:18 PM, Matthew Rosato wrote:
On 09/22/2017 12:03 AM, Jason Wang wrote:

On 2017/09/21 03:38, Matthew Rosato wrote:
Seems to make some progress on wakeup mitigation. The previous patch tries
to reduce unnecessary traversal of the waitqueue during rx. The attached
patch goes even further and disables rx polling while processing tx.
Please try it to see if it makes any difference.
Unfortunately, this patch doesn't seem to have made a difference.  I
tried runs with both this patch and the previous patch applied, as well
as only this patch applied for comparison (numbers from vhost thread of
sending VM):

4.12    4.13     patch1   patch2   patch1+2
2.00%   +3.69%   +2.55%   +2.81%   +2.69%   [...] __wake_up_sync_key

In each case, the regression in throughput was still present.
This probably means some other sources of the wakeups were missed. Could
you please record the callers of __wake_up_sync_key()?
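One way to record those callers is ftrace's function tracer with per-call stack traces. A minimal sketch, assuming tracefs is mounted at /sys/kernel/debug/tracing and the workload is run during the capture window; it prints a notice and does nothing where tracefs is not writable:

```shell
#!/bin/sh
# Sketch: dump a kernel stack trace for every __wake_up_sync_key() call.
trace_wakeup_callers() {
    T=/sys/kernel/debug/tracing
    if [ ! -w "$T/set_ftrace_filter" ]; then
        echo "tracefs not writable; run as root on the test machine"
        return 0
    fi
    echo __wake_up_sync_key > "$T/set_ftrace_filter"
    echo function > "$T/current_tracer"
    echo 1 > "$T/options/func_stack_trace"   # stack trace per traced call
    echo 1 > "$T/tracing_on"
    sleep 10                                 # run the netperf workload now
    echo 0 > "$T/tracing_on"
    head -100 "$T/trace"                     # each hit followed by its stack
    # Restore defaults so the filter doesn't linger.
    echo 0 > "$T/options/func_stack_trace"
    echo nop > "$T/current_tracer"
    echo > "$T/set_ftrace_filter"
}
trace_wakeup_callers
```

The resulting trace entries look like the sock_def_readable backtrace quoted below, one stack per wakeup, so distinct callers show up directly.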

Hi Jason,

With your 2 previous patches applied, every call to __wake_up_sync_key
(for both sender and server vhost threads) shows the following stack trace:

      vhost-11478-11520 [002] ....   312.927229: __wake_up_sync_key
<-sock_def_readable
      vhost-11478-11520 [002] ....   312.927230: <stack trace>
  => dev_hard_start_xmit
  => sch_direct_xmit
  => __dev_queue_xmit
  => br_dev_queue_push_xmit
  => br_forward_finish
  => __br_forward
  => br_handle_frame_finish
  => br_handle_frame
  => __netif_receive_skb_core
  => netif_receive_skb_internal
  => tun_get_user
  => tun_sendmsg
  => handle_tx
  => vhost_worker
  => kthread
  => kernel_thread_starter
  => kernel_thread_starter

Ping...  Jason, any other ideas or suggestions?


Sorry for the late reply; I'm recovering from a long holiday. Will get back to this soon.

Thanks
