On Tue, Jul 07, 2015 at 08:53:59AM +0800, Fam Zheng wrote:
> On Mon, 07/06 20:09, Michael S. Tsirkin wrote:
> > On Mon, Jul 06, 2015 at 04:21:16PM +0100, Stefan Hajnoczi wrote:
> > > On Mon, Jul 06, 2015 at 11:32:25AM +0800, Jason Wang wrote:
> > > > 
> > > > 
> > > > On 07/02/2015 08:46 PM, Stefan Hajnoczi wrote:
> > > > > On Tue, Jun 30, 2015 at 04:35:24PM +0800, Jason Wang wrote:
> > > > >> On 06/30/2015 11:06 AM, Fam Zheng wrote:
> > > > >>> virtio_net_receive still does the check by calling
> > > > >>> virtio_net_can_receive; if the device or driver is not ready,
> > > > >>> the packet is dropped.
> > > > >>>
> > > > >>> This is necessary because returning false from can_receive
> > > > >>> complicates things: the peer would disable sending until we
> > > > >>> explicitly flush the queue.
> > > > >>>
> > > > >>> Signed-off-by: Fam Zheng <f...@redhat.com>
> > > > >>> ---
> > > > >>>  hw/net/virtio-net.c | 1 -
> > > > >>>  1 file changed, 1 deletion(-)
> > > > >>>
> > > > >>> diff --git a/hw/net/virtio-net.c b/hw/net/virtio-net.c
> > > > >>> index d728233..dbef0d0 100644
> > > > >>> --- a/hw/net/virtio-net.c
> > > > >>> +++ b/hw/net/virtio-net.c
> > > > >>> @@ -1503,7 +1503,6 @@ static int virtio_net_load_device(VirtIODevice *vdev, QEMUFile *f,
> > > > >>>  static NetClientInfo net_virtio_info = {
> > > > >>>      .type = NET_CLIENT_OPTIONS_KIND_NIC,
> > > > >>>      .size = sizeof(NICState),
> > > > >>> -    .can_receive = virtio_net_can_receive,
> > > > >>>      .receive = virtio_net_receive,
> > > > >>>      .link_status_changed = virtio_net_set_link_status,
> > > > >>>      .query_rx_filter = virtio_net_query_rxfilter,
> > > > >> A side effect of this patch is that it will read and then drop the
> > > > >> packet if the guest driver is not ok.
> > > > > I think that the semantics of .can_receive() and .receive() return
> > > > > values are currently incorrect in many NICs.  They have .can_receive()
> > > > > functions that return false for conditions where .receive() would
> > > > > discard the packet.  So what happens is that packets get queued when
> > > > > they should actually be discarded.
> > > > 
> > > > Yes, but they are bugs more or less.
> > > > 
> > > > >
> > > > > The purpose of the flow control (queuing) mechanism is to tell the
> > > > > sender to hold off until the receiver has more rx buffers available.
> > > > > It's a short-term thing that doesn't include link down, rx disable,
> > > > > or NIC reset states.
> > > > >
> > > > > Therefore, I think this patch will not introduce a regression.  It is
> > > > > adjusting the code to stop queuing packets when they should actually
> > > > > be dropped.
> > > > >
> > > > > Thoughts?
> > > > 
> > > > I agree there's no functional issue. But it causes wasted CPU cycles
> > > > (consider a guest that is being flooded). Sometimes it may even be
> > > > dangerous. For tap, we're probably ok since we have 756ae78b, but for
> > > > other backends, we don't.
> > > 
> > > If the guest uses iptables rules or other mechanisms to drop bogus
> > > packets the cost is even higher than discarding them at the QEMU layer.
> > > 
> > > What's more, if you're using link down as a DoS mitigation
> > > strategy, then you might as well hot-unplug the NIC.
> > > 
> > > Stefan
> > 
> > 
> > 
> > Frankly, I don't see the point of the patch.  Is this supposed to be a
> > bugfix? If so, there should be a description of how to trigger the
> > bug.  Is this an optimization? If so, there should be some numbers
> > showing a gain.
> 
> It's a bug fix: we are not flushing the queue when DRIVER_OK is being set or
> when a buffer becomes available (the virtio_net_can_receive conditions). This
> wasn't an issue before a90a7425cf, but since then the semantics are enforced.
> 
> Fam

I think the safest and most obvious fix is to flush on DRIVER_OK then (unless
vhost is started). That might be 2.4 material.

-- 
MST
