On Thu, Feb 22, 2007 at 08:26:22PM +0000, Toby Douglass wrote:
<snip>
> >Likely what the NIC can push out and what the link can
> >handle is different, so trying to cater to the output buffer is a futile
> >exercise, as far as I can see.
> 
> So, the final implication is that users will ONLY ever perform serial 
> writes on a socket, no matter what it is?

I'm not following.

What I was trying to say is that I suspect the NIC can push packets onto
the wire faster than the rest of the path can carry them. Meaning that trying
not to overflow the local send buffer doesn't buy you anything.

Example: If the NIC is on a 100BaseT, but the gateway is on a T3, even if
you didn't overflow the UDP stack locally, you'd still get tons of packet
loss at the gateway, because it couldn't possibly forward them all onto the
T3. In other words, trying to limit your send rate by using the local buffer
as a throttle doesn't make sense; the local buffer is almost never the
bottleneck, unless you've still got a USR Sportster on a serial cable ;)
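
(To put rough numbers on it: 100BaseT is 100 Mbit/s, while a T3 is about
45 Mbit/s. So if you drive the NIC at line rate, the gateway can forward
less than half of what arrives; the rest gets dropped there no matter how
carefully you managed the local buffer.)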

So, at the end of the day, to send UDP packets all you need to do is call
send() or sendto(). No IOCP, no polling, no nothing. Everything goes out as
discrete datagrams, so you don't need to worry about concurrent thread
access or anything like that, either. Your socket writes will never get
interleaved, since that's simply not the way UDP works at any level
(protocol, implementation, or API).
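
For what it's worth, here's a minimal sketch of that fire-and-forget pattern.
The address 192.0.2.10 and port 9999 are just placeholders, and error
handling is kept to the bare minimum:

  /* Send one UDP datagram with a plain sendto(); no IOCP, no polling. */
  #include <arpa/inet.h>
  #include <netinet/in.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      if (fd < 0) { perror("socket"); return 1; }

      struct sockaddr_in dst;
      memset(&dst, 0, sizeof(dst));
      dst.sin_family = AF_INET;
      dst.sin_port = htons(9999);                    /* placeholder port */
      inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr); /* placeholder addr */

      const char msg[] = "hello";
      /* Each sendto() hands the kernel one complete datagram; it goes out
       * whole or not at all, so writes from other threads can't interleave
       * with it. */
      if (sendto(fd, msg, sizeof(msg) - 1, 0,
                 (struct sockaddr *)&dst, sizeof(dst)) < 0)
          perror("sendto");

      close(fd);
      return 0;
  }

That's really all there is to it: one call per datagram, and the kernel
takes it from there.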
