Re: [Qemu-devel] [QEMU RFC 0/2] Spying on Memory to implement ethernet can_recieve()

2013-02-17 Thread Peter Crosthwaite
+Kuo-Jung Su

The same issue is present in the under-review Faraday ethernet device model.

Regards,
Peter

On Mon, Jan 28, 2013 at 12:20 AM, Anthony Liguori anth...@codemonkey.ws wrote:
 Peter Crosthwaite peter.crosthwa...@xilinx.com writes:

 Hi All,

 Have a bit of a tricky question about ethernet controllers. We are
 maintaining two ethernet controllers, the Cadence GEM and the Xilinx AXI
 Ethernet, both of which are scatter-gather (SG) DMA capable. The issue comes
 about when trying to implement the can_receive() function for each.

 For the sake of background, the ethernet can_receive() function is
 used by net devices to signal to the net API that they can't consume
 data. As QEMU network interfaces are infinitely faster than real
 hardware, you need to implement this function to insulate against mass
 packet drops. can_receive() has no real hardware analog; the
 network devices would just drop the packets (but the packets would be
 coming at a much slower rate).

 For most devices, the return of this function depends on internal
 state of the device. However, for SG DMA, whether or not you can
 receive is dependent on the state of the SG DMA buffer descriptors,
 which are stored in RAM. So to properly implement can_receive() we
 need to spy on the contents of memory, to see if the next descriptor
 is valid. This is not too hard. The hard part is monitoring the
 descriptor for change after you have returned false from can_receive(),
 so you can tell the net layer you are now ready. Essentially, you need
 to watch memory and wake up when the CPU writes to the magic
 address. Patch 1 is a hackish first attempt at this for the Cadence
 GEM, using the Memory API.

 I don't think there's anything special here.  This comes down to: how
 can a device emulation provide flow control information to the backend?

 There are two ways this works today in QEMU.  Most devices have some
 sort of mailslot register that's notified when RX buffers are added to
 the descriptor ring.  From that, you call qemu_flush_queued_packets() to
 start the flow again.  virtio-net is an example of this.

 If there is truly no notification when adding descriptors, the only
 option is to use a timer to periodically poll the descriptor ring.  This
 is how our USB controllers work.

 Trying to write-protect memory is a losing battle.  You'll end up with a
 timer anyway to avoid constantly taking page faults.

 Regards,

 Anthony Liguori






Re: [Qemu-devel] [QEMU RFC 0/2] Spying on Memory to implement ethernet can_recieve()

2013-01-27 Thread Anthony Liguori
Peter Crosthwaite peter.crosthwa...@xilinx.com writes:

 Hi All,

 Have a bit of a tricky question about ethernet controllers. We are
 maintaining two ethernet controllers, the Cadence GEM and the Xilinx AXI
 Ethernet, both of which are scatter-gather (SG) DMA capable. The issue comes
 about when trying to implement the can_receive() function for each.

 For the sake of background, the ethernet can_receive() function is
 used by net devices to signal to the net API that they can't consume
 data. As QEMU network interfaces are infinitely faster than real
 hardware, you need to implement this function to insulate against mass
 packet drops. can_receive() has no real hardware analog; the
 network devices would just drop the packets (but the packets would be
 coming at a much slower rate).

 For most devices, the return of this function depends on internal
 state of the device. However, for SG DMA, whether or not you can
 receive is dependent on the state of the SG DMA buffer descriptors,
 which are stored in RAM. So to properly implement can_receive() we
 need to spy on the contents of memory, to see if the next descriptor
 is valid. This is not too hard. The hard part is monitoring the
 descriptor for change after you have returned false from can_receive(),
 so you can tell the net layer you are now ready. Essentially, you need
 to watch memory and wake up when the CPU writes to the magic
 address. Patch 1 is a hackish first attempt at this for the Cadence
 GEM, using the Memory API.

I don't think there's anything special here.  This comes down to: how
can a device emulation provide flow control information to the backend?

There are two ways this works today in QEMU.  Most devices have some
sort of mailslot register that's notified when RX buffers are added to
the descriptor ring.  From that, you call qemu_flush_queued_packets() to
start the flow again.  virtio-net is an example of this.
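For reference, the mailslot pattern described here might look roughly like the following sketch. All names are hypothetical stand-ins (a real QEMU model embeds a NICState and calls the actual qemu_flush_queued_packets()); the point is only the shape of the flow control:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical device state; a real QEMU model embeds NICState etc. */
typedef struct {
    uint32_t rx_head;   /* next descriptor the device will consume */
    uint32_t rx_tail;   /* last descriptor the guest made available */
    bool flushed;       /* stand-in for qemu_flush_queued_packets() */
} EthState;

/* can_receive(): true only while the guest has posted free RX descriptors. */
static bool eth_can_receive(EthState *s)
{
    return s->rx_head != s->rx_tail;
}

/* Write handler for a (hypothetical) RX-tail "mailslot" register.  The
 * guest adding descriptors is the natural point to restart the flow. */
static void eth_write_rx_tail(EthState *s, uint32_t val)
{
    s->rx_tail = val;
    if (eth_can_receive(s)) {
        /* Real code: qemu_flush_queued_packets(qemu_get_queue(s->nic)); */
        s->flushed = true;
    }
}
```

The flush hangs off a guest-visible register write, so no memory watching is needed.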

If there is truly no notification when adding descriptors, the only
option is to use a timer to periodically poll the descriptor ring.  This
is how our USB controllers work.
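The timer fallback is just a self-rearming callback; a sketch with stand-in types (a real device would use a QEMUTimer via timer_new_ns()/timer_mod(), and the interval here is purely illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define POLL_INTERVAL_NS (10 * 1000 * 1000)  /* 10 ms; purely illustrative */

typedef struct {
    bool ring_has_buffers;   /* stand-in for re-reading the ring from RAM */
    bool flushed;            /* stand-in for qemu_flush_queued_packets() */
    int64_t deadline_ns;     /* when the timer fires next */
} PollState;

/* Timer callback: re-check the descriptor ring and restart the net queue
 * if the guest posted buffers since the last poll, then re-arm. */
static void poll_timer_cb(PollState *s, int64_t now_ns)
{
    if (s->ring_has_buffers) {
        s->flushed = true;   /* real code: qemu_flush_queued_packets(...) */
    }
    s->deadline_ns = now_ns + POLL_INTERVAL_NS;  /* real code: timer_mod() */
}
```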

Trying to write-protect memory is a losing battle.  You'll end up with a
timer anyway to avoid constantly taking page faults.

Regards,

Anthony Liguori





[Qemu-devel] [QEMU RFC 0/2] Spying on Memory to implement ethernet can_recieve()

2013-01-26 Thread Peter Crosthwaite
Hi All,

Have a bit of a tricky question about ethernet controllers. We are maintaining
two ethernet controllers, the Cadence GEM and the Xilinx AXI Ethernet, both of
which are scatter-gather (SG) DMA capable. The issue comes about when trying to
implement the can_receive() function for each.

For the sake of background, the ethernet can_receive() function is used by net
devices to signal to the net API that they can't consume data. As QEMU network
interfaces are infinitely faster than real hardware, you need to implement this
function to insulate against mass packet drops. can_receive() has no real
hardware analog; the network devices would just drop the packets (but the
packets would be coming at a much slower rate).

For most devices, the return of this function depends on internal state of the
device. However, for SG DMA, whether or not you can receive is dependent on the
state of the SG DMA buffer descriptors, which are stored in RAM. So to properly
implement can_receive() we need to spy on the contents of memory, to see if the
next descriptor is valid. This is not too hard. The hard part is monitoring the
descriptor for change after you have returned false from can_receive(), so you
can tell the net layer you are now ready. Essentially, you need to watch memory
and wake up when the CPU writes to the magic address. Patch 1 is a hackish
first attempt at this for the Cadence GEM, using the Memory API.
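For illustration, the "spying" half (reading the next descriptor to decide can_receive()) might look like the sketch below. Guest RAM is modeled with a plain array and the ownership bit is hypothetical; a real device would read the descriptor with cpu_physical_memory_read() or dma_memory_read() and use its own descriptor layout:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define DESC_OWN 0x80000000u  /* hypothetical "owned by hardware" flag */

/* One RX buffer descriptor as it lives in guest RAM. */
typedef struct {
    uint32_t flags;  /* control/status word, including DESC_OWN */
    uint32_t addr;   /* guest-physical buffer address */
} RxDesc;

/* can_receive(): peek at the next descriptor in (modeled) guest memory
 * and report whether the DMA engine could accept a packet right now. */
static bool gem_can_receive(const RxDesc *ring, uint32_t head)
{
    return (ring[head].flags & DESC_OWN) != 0;
}
```

The check itself is easy; the hard part described above is learning when `flags` changes after returning false.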

The alternative to this is to implement timing for the ethernet device, which
is truer to the real hardware: throttle the ethernet traffic based on wire
speed, not the state of the ethernet controller. I've had an attempt at this
for the AXI Ethernet (Patch 2), but a proper solution to this would be to
implement this wire side (in the net layer) rather than device side.
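The wire-speed approach reduces to computing how long a frame would occupy the physical link and deferring delivery by that much; the arithmetic is just:

```c
#include <assert.h>
#include <stdint.h>

/* Nanoseconds a frame of len_bytes occupies a link of mbps megabits/s:
 * len_bytes * 8 bits / (mbps * 1e6 bit/s)  =  len_bytes * 8000 / mbps  ns. */
static int64_t wire_delay_ns(int64_t len_bytes, int64_t mbps)
{
    return len_bytes * 8000 / mbps;
}
```

For example, a 1500-byte frame at 1 Gbit/s occupies the wire for 12 us; a timer armed with that delay would gate when the next packet may be accepted.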

Peter Crosthwaite (2):
  cadence_gem: Throttle traffic using buffer state
  xilinx_axienet: Model timing.

 hw/cadence_gem.c    | 75 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++------------
 hw/xilinx_axienet.c | 16 ++--
 2 files changed, 77 insertions(+), 14 deletions(-)

-- 
1.7.12.1.396.g16eed7c





Re: [Qemu-devel] [QEMU RFC 0/2] Spying on Memory to implement ethernet can_recieve()

2013-01-26 Thread Edgar E. Iglesias
On Sat, Jan 26, 2013 at 12:18:29PM -0800, Peter Crosthwaite wrote:
 Hi All,
 
 Have a bit of a tricky question about ethernet controllers. We are
 maintaining two ethernet controllers, the Cadence GEM and the Xilinx AXI
 Ethernet, both of which are scatter-gather (SG) DMA capable. The issue comes
 about when trying to implement the can_receive() function for each.
 
 For the sake of background, the ethernet can_receive() function is used by
 net devices to signal to the net API that they can't consume data. As QEMU
 network interfaces are infinitely faster than real hardware, you need to
 implement this function to insulate against mass packet drops. can_receive()
 has no real hardware analog; the network devices would just drop the packets
 (but the packets would be coming at a much slower rate).
 
 For most devices, the return of this function depends on internal state of
 the device. However, for SG DMA, whether or not you can receive is dependent
 on the state of the SG DMA buffer descriptors, which are stored in RAM. So to
 properly implement can_receive() we need to spy on the contents of memory, to
 see if the next descriptor is valid. This is not too hard. The hard part is
 monitoring the descriptor for change after you have returned false from
 can_receive(), so you can tell the net layer you are now ready. Essentially,
 you need to watch memory and wake up when the CPU writes to the magic
 address. Patch 1 is a hackish first attempt at this for the Cadence GEM,
 using the Memory API.
 

Hi Peter,

AFAIK, this flow control is typically part of the stream interface towards the
DMA and based on internal state from the DMA. For example, add something like
can_push() to stream.c and have xilinx_axidma.c call that function when its
internal state changes wrt being able to receive data.

For xilinx_axidma.c, can_push() would propagate the running and idle state:

can_push(stream_running(s) && !stream_idle(s));

from various points where this state changes.
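Fleshed out slightly, that suggestion could look like the following. The names are hypothetical stand-ins modeled loosely on QEMU's stream interface, not the actual stream.c API:

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal model of a DMA stream channel (cf. xilinx_axidma.c). */
typedef struct {
    bool running;
    bool idle;
    bool peer_ready;  /* stand-in for notifying the attached MAC model */
} StreamSlave;

static bool stream_running(const StreamSlave *s) { return s->running; }
static bool stream_idle(const StreamSlave *s)    { return s->idle; }

/* can_push(): push-style flow control from the DMA to its peer; called
 * from every point where the running/idle state changes. */
static void can_push(StreamSlave *s, bool ready)
{
    s->peer_ready = ready;  /* real code would wake the MAC / net layer */
}

/* Example call site, matching the one-liner above: */
static void stream_state_changed(StreamSlave *s)
{
    can_push(s, stream_running(s) && !stream_idle(s));
}
```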

I can imagine that there is hw without this kind of flow control, but I don't
think these MACs & DMAs are like that.

Cheers,
Edgar

