From: Andrew Gallatin [mailto:[EMAIL PROTECTED]]
> Andrew Gallatin writes:
> 
>  > xmit routine was called 683441 times.  This means that the queue was
>  > only a little over two packets deep on average, and vmstat shows idle
>  > time.  I've tried piping additional packets to nghook mx0:orphans
>  > input, but that does not seem to increase the queue depth.
>  > 
> 
> The problem here seems to be that rather than just slapping the
> packets onto the driver's queue, ng_source passes the mbuf down
> to more of netgraph, where there is at least one spinlock,
> and the driver's ifq lock is taken and released a zillion times
> by ether_output_frame(), etc.
> 
> A quick hack (appended) to just slap the mbufs onto the if_snd queue
> gets me from ~410Kpps to 1020Kpps.  I also see very deep queues
> with this (because I'm slamming 4K pkts onto the queue at once..).
> 
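
To make the idea concrete, a minimal sketch of the "just slap the mbufs
onto the if_snd queue" approach on 5.x could look like the code below.
This is not the patch that was appended to the original mail; the
src_xmit_batch() name and its pre-built batch of mbufs are made up for
illustration, and only the if_snd/IF_* ifqueue macros and if_start are
the stock ones from <net/if_var.h>.

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/mbuf.h>
#include <sys/socket.h>
#include <net/if.h>
#include <net/if_var.h>

/* Hypothetical helper: enqueue a batch of already-built mbufs directly
 * onto the driver's if_snd queue, then kick the driver once. */
static void
src_xmit_batch(struct ifnet *ifp, struct mbuf **pkts, int npkts)
{
	int i;

	/* Take the ifq mutex once for the whole batch instead of once
	 * per packet down in ether_output_frame(). */
	IF_LOCK(&ifp->if_snd);
	for (i = 0; i < npkts; i++) {
		if (_IF_QFULL(&ifp->if_snd)) {
			_IF_DROP(&ifp->if_snd);
			m_freem(pkts[i]);
			continue;
		}
		_IF_ENQUEUE(&ifp->if_snd, pkts[i]);
	}
	IF_UNLOCK(&ifp->if_snd);

	/* Kick the driver once; it drains the (now deep) queue itself. */
	if ((ifp->if_flags & IFF_OACTIVE) == 0)
		(*ifp->if_start)(ifp);
}

Taking the ifq mutex once per batch is what lets the queue get deep
enough for the driver to stay busy.
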
> This is nearly identical to the linux pktgen figure on the same
> hardware, which makes me feel comfortable that there is a lot of
> headroom in the driver/firmware API and I'm not botching something
> in the FreeBSD driver.
> 
> BTW, did you see your 800Kpps on 4.x or 5.x?  If it was 4.x, what do
> you see on 5.x if you still have the same setup handy?
> 
> Thanks,

800Kpps was on 4.7, on a dual 2.8GHz Xeon with 100MHz PCI-X, using em.
I will try 5.3.

--don