On Wed, May 27, 2015 at 03:24:38PM +0800, Fam Zheng wrote:
> On Tue, 05/26 10:18, Stefan Hajnoczi wrote:
> > On Mon, May 25, 2015 at 11:51:23AM +0800, Fam Zheng wrote:
> > > On Tue, 05/19 15:54, Stefan Hajnoczi wrote:
> > > > On Tue, May 19, 2015 at 10:51:01AM +0000, Fam Zheng wrote:
> > > > > This callback is called by the main loop before polling s->fd; if it
> > > > > returns false, the fd will not be polled in this iteration.
> > > > > 
> > > > > This is redundant with the checks inside the read callback. After this
> > > > > patch, the data will be copied from s->fd to s->iov when it arrives. If
> > > > > the device can't receive, the packet will be queued to incoming_queue,
> > > > > and when the device status changes, this queue will be flushed.
> > > > > 
> > > > > Also remove the qemu_can_send_packet() check in netmap_send. If it's
> > > > > true, we are good; if it's false, qemu_sendv_packet_async() will
> > > > > return 0 and read polling will be disabled until netmap_send_completed
> > > > > is called.
> > > > 
> > > > This causes unbounded memory usage in QEMU because
> > > > qemu_net_queue_append_iov() does not drop packets when sent_cb != NULL.
> > > 
> > > I think netmap_send will use netmap_read_poll(s, false) to stop reading,
> > > so only the first packet will be queued. Why is it unbounded?
> > 
> > I looked again and I agree with you.  It should stop after the first
> > packet and resume when the peer flushes the queue.
> > 
> 
> The other patches (socket and tap) have the same rationale. Are you happy with
> this approach?

Yes, but I have a question on the tap patch.
