Avi Kivity wrote:
> Anthony Liguori wrote:
>   
>>> How about the other way round: when the vlan consumer detects it can 
>>> no longer receive packets, it tells that to the vlan.  When all vlan 
>>> consumers can no longer receive, tell the producer to stop 
>>> producing.  For the tap producer, this is simply removing its fd from 
>>> the read poll list.  When a vlan consumer becomes ready to receive 
>>> again, it tells the vlan, which tells the producers, which then 
>>> install their fds back again.
>>>       
>> Yeah, that's a nice idea.   I'll think about it.  I don't know if it's 
>> really worth doing as an intermediate step though.  What I'd really 
>> like to do is have a vlan interface where consumers published all of 
>> their receive buffers.  Then there's no need for notifications of 
>> receive-ability.
>>     
>
> That's definitely better, and is also more multiqueue nic / vringfd 
> friendly.
>
> I still think interrupt-on-halfway-mark is needed much more urgently.  
> It deals with concurrency much better:
>   

We already sort of do this.  In try_fill_recv() in virtio-net.c, we 
allocate as many skbs as we can to fill the rx queue and keep track of 
the most we've ever been able to allocate.  Whenever we process an rx 
packet, we check whether the number of buffers left in the rx queue has 
dropped below half of that high-water mark, and if so, we refill the 
queue.
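
For concreteness, the refill logic boils down to something like this 
(a paraphrased, self-contained sketch; the struct and helper names are 
illustrative, not the actual driver source):

#include <stdbool.h>

/* Illustrative ring state; names mirror the idea, not virtio-net.c. */
struct rx_ring {
    unsigned int num;    /* buffers currently posted to the ring */
    unsigned int max;    /* most buffers we ever managed to post */
    unsigned int avail;  /* stand-in for allocations that will succeed */
};

/* Stand-in for allocating an skb and adding it to the rx queue. */
static bool post_rx_buffer(struct rx_ring *r)
{
    if (r->avail == 0)
        return false;    /* simulated allocation failure */
    r->avail--;
    return true;
}

static void try_fill_recv(struct rx_ring *r)
{
    /* Post buffers until allocation fails, then remember the
     * high-water mark so we know what "full" means for this ring. */
    while (post_rx_buffer(r))
        r->num++;
    if (r->num > r->max)
        r->max = r->num;
}

/* Per received packet: refill once we fall below half the best
 * fill level we've ever reached. */
static void on_rx_packet(struct rx_ring *r)
{
    if (--r->num < r->max / 2)
        try_fill_recv(r);
}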

In the common case of small queues, this should give exactly the 
behavior you describe.  We don't currently suppress the RX notification 
even though we really could.  The can_receive changes are really the 
key to this: as long as we aren't suppressing select()'ing on the tap 
fd, we can always suppress the RX notification, since we only need the 
guest's notification to know when to re-enable a suppressed fd.  That's 
been sitting on my TODO list for a bit.
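
To make the flow-control side concrete, here's roughly what the wiring 
could look like (the types below are minimal stand-ins and tap_can_send 
is a hypothetical name; in QEMU the predicate would be registered with 
qemu_set_fd_handler2() so it gets consulted before each select()):

/* Minimal stand-in types; the real QEMU structures differ. */
typedef struct VLANClientState VLANClientState;
struct VLANClientState {
    int (*fd_can_read)(void *opaque);  /* NULL means "always ready" */
    void *opaque;
    VLANClientState *next;
};

typedef struct {
    VLANClientState *first_client;
} VLANState;

typedef struct {
    int fd;
    VLANState *vlan;
} TAPState;

/* Read-poll predicate: keep the tap fd in the select() set only
 * while at least one vlan consumer can still take a packet. */
static int tap_can_send(void *opaque)
{
    TAPState *s = opaque;
    VLANClientState *vc;

    for (vc = s->vlan->first_client; vc; vc = vc->next)
        if (!vc->fd_can_read || vc->fd_can_read(vc->opaque))
            return 1;
    return 0;
}

/* Registered as, e.g.:
 *   qemu_set_fd_handler2(s->fd, tap_can_send, tap_send, NULL, s);
 * When every consumer says no, the fd drops out of select() and the
 * host kernel's socket buffer backpressures the producer. */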

> rx:
>   host interrupts guest on halfway mark
>   guest starts processing packets
>   host keeps delivering
>
> tx:
>   guest kicks host on halfway mark
>   host starts processing packets
>   second vcpu on guest keeps on queueing
>   

I'm not convinced it's at all practical to suppress notifications in 
the front-end.  The guest simply doesn't know whether more packets are 
coming, which means TX mitigation would have to be done within the 
front-end.  We've been there; it's not as nice as doing it in the 
back-end.

What we really ought to do in the back-end, though, is start processing 
TX packets as soon as we begin TX mitigation, rather than waiting for 
the timer to fire.  That would address the ping latency issue while 
(hopefully) also increasing throughput.  One thing I've wanted to try 
is registering a bottom-half or a polling function so that the IO 
thread is always trying to process TX packets while the TX timer is 
active.
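
Something along these lines, say (a rough sketch; virtio_net_flush_tx(), 
tx_timer_active, and the VirtIONet fields are hypothetical names, though 
qemu_bh_new()/qemu_bh_schedule() are the real bottom-half API):

typedef struct {
    QEMUBH *tx_bh;          /* hypothetical field */
    int tx_timer_active;    /* hypothetical field */
    /* ... */
} VirtIONet;

/* Assumed helper that drains whatever is currently in the tx ring. */
static void virtio_net_flush_tx(VirtIONet *n);

static void virtio_net_tx_bh(void *opaque)
{
    VirtIONet *n = opaque;

    virtio_net_flush_tx(n);

    /* Re-arm ourselves while the mitigation timer is pending, so the
     * IO thread keeps draining the ring instead of idling until the
     * timer fires. */
    if (n->tx_timer_active)
        qemu_bh_schedule(n->tx_bh);
}

/* At init:    n->tx_bh = qemu_bh_new(virtio_net_tx_bh, n);
 * On TX kick: arm the mitigation timer as today, and also
 *             qemu_bh_schedule(n->tx_bh) so processing starts now. */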

Regards,

Anthony Liguori

> It's also much better with multiqueue NICs, where there's no socket 
> buffer to hold the packets while we're out of descriptors.
>
>   

