On Thu, 25 Nov 2010, Ivan Voras wrote:

On 11/25/10 05:54, Bruce Evans wrote:

Yes, this is normal.  It is how ping -f doesn't work -- it doesn't do
anything resembling flooding, except possibly accidentally when 1 Mbps
ethernet was fast.  The ramping up is also accidental.
...

Thank you, that was a very informative explanation!

So in case I really do want to flood ping or in some other way test PPS throughput, are there any convenient tools for that?

netrate (netsend/netblast) is almost adequate.  I normally use a 1990ish
version of ttcp with tiny modifications.  It is also almost adequate.
A problem with all of these is that they cannot do the right thing when
the send queue fills up.  send() should then return ENOBUFS.  Userland
then has the following bad options for handling this:
- loop calling send().  Takes 100% of 1 CPU and may consume time needed
  for productive i/o
- try using blocking mode and select(), and usually discover that select()
  doesn't work for handling ENOBUFS.  Maybe it is the blocking mode that
  doesn't work.  I think the FreeBSD kernel can't easily tell if it should
  block because the driver would return ENOBUFS.  Userland has even less
  chance of telling if it should not try to send() because the i/o would
  either block or fail with ENOBUFS
- try to predict when send() will not block due to ENOBUFS, and wait until
  then non-busily using sleep().  Old ttcp does essentially this, and does
  it very badly -- it uses a very long hard-coded sleep time that may have
  worked when 1 Mbps ethernet was fast, but is now not so good.  Newer ttcps
  try harder using shorter sleeps, but still tend to fail due to the sleep
  granularity being large.  Send buffer queue lengths for 1 Gbps ethernet
  are typically 1024 packets, so the whole buffer can be drained in less than
  1 msec and the sleep granularity needs to be considerably less than 1 msec
  to keep up, but this is unreasonably short.  I avoid this problem by
  using a very large queue length (~10000), so that the 10 msec sleeps in
  old ttcp can keep up (this also depends on my NIC not draining as fast
  as theoretically possible; otherwise I would need a queue length of
  ~40000).  netsend also uses sleeps to control its rate, and cannot work
  very well due to sleep granularity.

The limits for input are easier to test, once you can generate floods from
somewhere, since the ENOBUFS problem corresponds to saturation/dropping
packets/flow control, so when you hit it you know that you are at the
limit.

sam@ implemented or ported kttcp (kernel ttcp), which reduces at least
the network stack overheads for sending packets, so the limits are
closer to the driver and/or network.  I haven't got around to trying
it.  I don't know how it handles ENOBUFS.

Bruce
_______________________________________________
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"
